00:00:00.000  Started by upstream project "autotest-per-patch" build number 132805
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.039  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:04.739  The recommended git tool is: git
00:00:04.740  using credential 00000000-0000-0000-0000-000000000002
00:00:04.741   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:04.752  Fetching changes from the remote Git repository
00:00:04.754   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:04.765  Using shallow fetch with depth 1
00:00:04.765  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:04.765   > git --version # timeout=10
00:00:04.774   > git --version # 'git version 2.39.2'
00:00:04.774  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:04.785  Setting http proxy: proxy-dmz.intel.com:911
00:00:04.785   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.747   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.761   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.773  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:10.773   > git config core.sparsecheckout # timeout=10
00:00:10.784   > git read-tree -mu HEAD # timeout=10
00:00:10.800   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:10.822  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:10.822   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
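The checkout above is a shallow (depth-1) fetch of refs/heads/master followed by a detached checkout of the fetched commit. Outside Jenkins, the same sequence looks roughly like this (URL and revision taken from the lines above; the per-command timeouts are Jenkins-specific):

    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507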
00:00:10.929  [Pipeline] Start of Pipeline
00:00:10.942  [Pipeline] library
00:00:10.944  Loading library shm_lib@master
00:00:10.944  Library shm_lib@master is cached. Copying from home.
00:00:10.958  [Pipeline] node
00:00:10.975  Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:10.976  [Pipeline] {
00:00:10.985  [Pipeline] catchError
00:00:10.986  [Pipeline] {
00:00:10.996  [Pipeline] wrap
00:00:11.004  [Pipeline] {
00:00:11.012  [Pipeline] stage
00:00:11.013  [Pipeline] { (Prologue)
00:00:11.030  [Pipeline] echo
00:00:11.032  Node: VM-host-WFP1
00:00:11.038  [Pipeline] cleanWs
00:00:11.048  [WS-CLEANUP] Deleting project workspace...
00:00:11.048  [WS-CLEANUP] Deferred wipeout is used...
00:00:11.054  [WS-CLEANUP] done
00:00:11.320  [Pipeline] setCustomBuildProperty
00:00:11.390  [Pipeline] httpRequest
00:00:12.691  [Pipeline] echo
00:00:12.693  Sorcerer 10.211.164.112 is alive
00:00:12.701  [Pipeline] retry
00:00:12.703  [Pipeline] {
00:00:12.717  [Pipeline] httpRequest
00:00:12.721  HttpMethod: GET
00:00:12.722  URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.722  Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.735  Response Code: HTTP/1.1 200 OK
00:00:12.735  Success: Status code 200 is in the accepted range: 200,404
00:00:12.736  Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:18.876  [Pipeline] }
00:00:18.893  [Pipeline] // retry
00:00:18.901  [Pipeline] sh
00:00:19.189  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
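The two steps above are a simple package cache: fetch a tarball of the jbp repo, pinned to the checked-out revision, from the "Sorcerer" mirror, then unpack it. A rough curl equivalent of the Jenkins httpRequest step (illustrative only; the pipeline uses the httpRequest step, not curl):

    curl -fO http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
    tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz

--no-same-owner makes tar assign extracted files to the invoking user rather than the owner recorded in the archive.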
00:00:19.204  [Pipeline] httpRequest
00:00:19.626  [Pipeline] echo
00:00:19.628  Sorcerer 10.211.164.112 is alive
00:00:19.638  [Pipeline] retry
00:00:19.640  [Pipeline] {
00:00:19.654  [Pipeline] httpRequest
00:00:19.659  HttpMethod: GET
00:00:19.660  URL: http://10.211.164.112/packages/spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:00:19.660  Sending request to url: http://10.211.164.112/packages/spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:00:19.673  Response Code: HTTP/1.1 200 OK
00:00:19.674  Success: Status code 200 is in the accepted range: 200,404
00:00:19.674  Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:02:25.882  [Pipeline] }
00:02:25.899  [Pipeline] // retry
00:02:25.907  [Pipeline] sh
00:02:26.190  + tar --no-same-owner -xf spdk_6584139bf1f810d65390a8fc2baea3291bcf9e05.tar.gz
00:02:28.743  [Pipeline] sh
00:02:29.028  + git -C spdk log --oneline -n5
00:02:29.028  6584139bf build: use VERSION file for storing version
00:02:29.028  a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:02:29.028  a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:29.028  2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:29.028  e2dfdf06c accel/mlx5: Register post_poller handler
00:02:29.052  [Pipeline] writeFile
00:02:29.069  [Pipeline] sh
00:02:29.358  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:29.373  [Pipeline] sh
00:02:29.665  + cat autorun-spdk.conf
00:02:29.665  SPDK_RUN_FUNCTIONAL_TEST=1
00:02:29.665  SPDK_TEST_NVME=1
00:02:29.665  SPDK_TEST_FTL=1
00:02:29.665  SPDK_TEST_ISAL=1
00:02:29.665  SPDK_RUN_ASAN=1
00:02:29.665  SPDK_RUN_UBSAN=1
00:02:29.665  SPDK_TEST_XNVME=1
00:02:29.665  SPDK_TEST_NVME_FDP=1
00:02:29.665  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:29.673  RUN_NIGHTLY=0
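autorun-spdk.conf is a plain KEY=value file; later stages simply source it, turning each line into a shell variable (visible as the '++' trace lines further down in this log). A minimal consumer:

    source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
    ((SPDK_TEST_FTL == 1)) && echo 'FTL tests requested'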
00:02:29.675  [Pipeline] }
00:02:29.689  [Pipeline] // stage
00:02:29.706  [Pipeline] stage
00:02:29.708  [Pipeline] { (Run VM)
00:02:29.722  [Pipeline] sh
00:02:30.006  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:30.006  + echo 'Start stage prepare_nvme.sh'
00:02:30.006  Start stage prepare_nvme.sh
00:02:30.006  + [[ -n 5 ]]
00:02:30.006  + disk_prefix=ex5
00:02:30.006  + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:30.006  + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:30.006  + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:30.006  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:30.006  ++ SPDK_TEST_NVME=1
00:02:30.006  ++ SPDK_TEST_FTL=1
00:02:30.006  ++ SPDK_TEST_ISAL=1
00:02:30.006  ++ SPDK_RUN_ASAN=1
00:02:30.006  ++ SPDK_RUN_UBSAN=1
00:02:30.006  ++ SPDK_TEST_XNVME=1
00:02:30.006  ++ SPDK_TEST_NVME_FDP=1
00:02:30.006  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:30.006  ++ RUN_NIGHTLY=0
00:02:30.006  + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:30.006  + nvme_files=()
00:02:30.006  + declare -A nvme_files
00:02:30.006  + backend_dir=/var/lib/libvirt/images/backends
00:02:30.006  + nvme_files['nvme.img']=5G
00:02:30.006  + nvme_files['nvme-cmb.img']=5G
00:02:30.006  + nvme_files['nvme-multi0.img']=4G
00:02:30.006  + nvme_files['nvme-multi1.img']=4G
00:02:30.006  + nvme_files['nvme-multi2.img']=4G
00:02:30.006  + nvme_files['nvme-openstack.img']=8G
00:02:30.006  + nvme_files['nvme-zns.img']=5G
00:02:30.006  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:02:30.006  + ((  SPDK_TEST_FTL == 1  ))
00:02:30.006  + nvme_files["nvme-ftl.img"]=6G
00:02:30.006  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:02:30.006  + nvme_files["nvme-fdp.img"]=1G
00:02:30.006  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:30.006  + for nvme in "${!nvme_files[@]}"
00:02:30.006  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:02:30.006  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:30.006  + for nvme in "${!nvme_files[@]}"
00:02:30.006  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:02:30.006  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:30.006  + for nvme in "${!nvme_files[@]}"
00:02:30.006  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:02:30.006  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:30.006  + for nvme in "${!nvme_files[@]}"
00:02:30.006  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:02:30.266  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:30.266  + for nvme in "${!nvme_files[@]}"
00:02:30.266  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:02:30.266  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:30.266  + for nvme in "${!nvme_files[@]}"
00:02:30.266  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:02:30.266  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:30.266  + for nvme in "${!nvme_files[@]}"
00:02:30.266  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:02:30.266  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:30.266  + for nvme in "${!nvme_files[@]}"
00:02:30.266  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:02:30.266  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:30.266  + for nvme in "${!nvme_files[@]}"
00:02:30.266  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:02:30.526  Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:30.526  ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:02:30.526  + echo 'End stage prepare_nvme.sh'
00:02:30.526  End stage prepare_nvme.sh
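prepare_nvme.sh builds an associative array mapping image name to size, adds the FTL and FDP images only when the corresponding test flags are set, and then creates each backing file. The "Formatting ..." lines match qemu-img's output, so each create_nvme_img.sh call plausibly reduces to a qemu-img create (an assumption; the script body is not shown in this log):

    # the nvme_files pattern from the trace above
    declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G)
    ((SPDK_TEST_FTL == 1)) && nvme_files[nvme-ftl.img]=6G
    for nvme in "${!nvme_files[@]}"; do
        # plausible core of create_nvme_img.sh -n <file> -s <size>
        qemu-img create -f raw -o preallocation=falloc \
            "/var/lib/libvirt/images/backends/ex5-$nvme" "${nvme_files[$nvme]}"
    done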
00:02:30.540  [Pipeline] sh
00:02:30.830  + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:30.830  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:30.830  
00:02:30.830  DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:30.830  SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:30.830  VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:30.830  HELP=0
00:02:30.830  DRY_RUN=0
00:02:30.830  NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:02:30.830  NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:30.830  NVME_AUTO_CREATE=0
00:02:30.830  NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:02:30.830  NVME_CMB=,,,,
00:02:30.830  NVME_PMR=,,,,
00:02:30.830  NVME_ZNS=,,,,
00:02:30.830  NVME_MS=true,,,,
00:02:30.830  NVME_FDP=,,,on,
00:02:30.830  SPDK_VAGRANT_DISTRO=fedora39
00:02:30.830  SPDK_VAGRANT_VMCPU=10
00:02:30.830  SPDK_VAGRANT_VMRAM=12288
00:02:30.830  SPDK_VAGRANT_PROVIDER=libvirt
00:02:30.830  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:30.830  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:30.830  SPDK_OPENSTACK_NETWORK=0
00:02:30.830  VAGRANT_PACKAGE_BOX=0
00:02:30.830  VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:30.830  FORCE_DISTRO=true
00:02:30.830  VAGRANT_BOX_VERSION=
00:02:30.830  EXTRA_VAGRANTFILES=
00:02:30.830  NIC_MODEL=e1000
00:02:30.830  
00:02:30.830  mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:30.830  /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
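vagrant_create_vm.sh parses each -b argument as a comma-separated disk spec and fans the fields out into the NVME_* lists printed above. Judging from that dump (an inference from this log, not from the script source), the field order is path,type,namespaces,cmb,pmr,zns,ms,fdp, with namespaces given as a colon-joined list:

    # e.g. the FDP disk: every field empty except the eighth
    IFS=, read -r path type ns cmb pmr zns ms fdp <<< \
        '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on'
    echo "path=$path type=$type fdp=$fdp"    # fdp=on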
00:02:33.362  Bringing machine 'default' up with 'libvirt' provider...
00:02:34.298  ==> default: Creating image (snapshot of base box volume).
00:02:34.556  ==> default: Creating domain with the following settings...
00:02:34.556  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1733760782_c5a3b5f4988b95fb4daf
00:02:34.556  ==> default:  -- Domain type:       kvm
00:02:34.556  ==> default:  -- Cpus:              10
00:02:34.556  ==> default:  -- Feature:           acpi
00:02:34.556  ==> default:  -- Feature:           apic
00:02:34.556  ==> default:  -- Feature:           pae
00:02:34.556  ==> default:  -- Memory:            12288M
00:02:34.556  ==> default:  -- Memory Backing:    hugepages: 
00:02:34.556  ==> default:  -- Management MAC:    
00:02:34.556  ==> default:  -- Loader:            
00:02:34.556  ==> default:  -- Nvram:             
00:02:34.556  ==> default:  -- Base box:          spdk/fedora39
00:02:34.556  ==> default:  -- Storage pool:      default
00:02:34.556  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733760782_c5a3b5f4988b95fb4daf.img (20G)
00:02:34.556  ==> default:  -- Volume Cache:      default
00:02:34.556  ==> default:  -- Kernel:            
00:02:34.556  ==> default:  -- Initrd:            
00:02:34.556  ==> default:  -- Graphics Type:     vnc
00:02:34.556  ==> default:  -- Graphics Port:     -1
00:02:34.556  ==> default:  -- Graphics IP:       127.0.0.1
00:02:34.556  ==> default:  -- Graphics Password: Not defined
00:02:34.556  ==> default:  -- Video Type:        cirrus
00:02:34.556  ==> default:  -- Video VRAM:        9216
00:02:34.556  ==> default:  -- Sound Type:	
00:02:34.556  ==> default:  -- Keymap:            en-us
00:02:34.556  ==> default:  -- TPM Path:          
00:02:34.556  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:02:34.556  ==> default:  -- Command line args: 
00:02:34.556  ==> default:     -> value=-device, 
00:02:34.556  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:02:34.556  ==> default:     -> value=-drive, 
00:02:34.556  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 
00:02:34.556  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:02:34.557  ==> default:     -> value=-drive, 
00:02:34.557  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 
00:02:34.557  ==> default:     -> value=-drive, 
00:02:34.557  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:34.557  ==> default:     -> value=-drive, 
00:02:34.557  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:34.557  ==> default:     -> value=-drive, 
00:02:34.557  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 
00:02:34.557  ==> default:     -> value=-drive, 
00:02:34.557  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 
00:02:34.557  ==> default:     -> value=-device, 
00:02:34.557  ==> default:     -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
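Each backing image becomes an emulated NVMe controller in the guest: a -device nvme controller, a -drive if=none for the raw file, and one -device nvme-ns per namespace (three of them on the multi-namespace controller, plus an nvme-subsys device with fdp=on wrapping the FDP controller). For a single controller the injected arguments amount to the following fragment (the machine and VM options generated by vagrant-libvirt are omitted):

    qemu-system-x86_64 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096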
00:02:34.815  ==> default: Creating shared folders metadata...
00:02:34.815  ==> default: Starting domain.
00:02:36.720  ==> default: Waiting for domain to get an IP address...
00:02:54.840  ==> default: Waiting for SSH to become available...
00:02:54.840  ==> default: Configuring and enabling network interfaces...
00:02:59.034      default: SSH address: 192.168.121.56:22
00:02:59.034      default: SSH username: vagrant
00:02:59.034      default: SSH auth method: private key
00:03:02.326  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:10.466  ==> default: Mounting SSHFS shared folder...
00:03:13.004  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:13.004  ==> default: Checking Mount..
00:03:14.383  ==> default: Folder Successfully Mounted!
00:03:14.383  ==> default: Running provisioner: file...
00:03:15.772      default: ~/.gitconfig => .gitconfig
00:03:16.377  
00:03:16.377    SUCCESS!
00:03:16.377  
00:03:16.377    cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:16.377    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:16.377    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:16.377  
00:03:16.385  [Pipeline] }
00:03:16.401  [Pipeline] // stage
00:03:16.410  [Pipeline] dir
00:03:16.410  Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:03:16.412  [Pipeline] {
00:03:16.423  [Pipeline] catchError
00:03:16.425  [Pipeline] {
00:03:16.438  [Pipeline] sh
00:03:16.721  + vagrant ssh-config --host vagrant
00:03:16.721  + sed -ne /^Host/,$p
00:03:16.721  + tee ssh_conf
00:03:20.006  Host vagrant
00:03:20.006    HostName 192.168.121.56
00:03:20.006    User vagrant
00:03:20.006    Port 22
00:03:20.006    UserKnownHostsFile /dev/null
00:03:20.006    StrictHostKeyChecking no
00:03:20.006    PasswordAuthentication no
00:03:20.006    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:20.006    IdentitiesOnly yes
00:03:20.006    LogLevel FATAL
00:03:20.006    ForwardAgent yes
00:03:20.006    ForwardX11 yes
00:03:20.006  
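vagrant ssh-config prints the connection settings for the box; sed -ne '/^Host/,$p' keeps everything from the first Host line onward (dropping any preamble or warnings), and tee writes the result to ssh_conf. Every remote step below reuses that file, following this pattern (filenames illustrative):

    ssh -t -F ssh_conf vagrant@vagrant 'uname -r'
    scp -F ssh_conf some-script.sh vagrant@vagrant:./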
00:03:20.020  [Pipeline] withEnv
00:03:20.023  [Pipeline] {
00:03:20.036  [Pipeline] sh
00:03:20.318  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:20.318  		source /etc/os-release
00:03:20.318  		[[ -e /image.version ]] && img=$(< /image.version)
00:03:20.318  		# Minimal, systemd-like check.
00:03:20.318  		if [[ -e /.dockerenv ]]; then
00:03:20.318  			# Clear garbage from the node's name:
00:03:20.318  			#  agt-er_autotest_547-896 -> autotest_547-896
00:03:20.318  			#  $HOSTNAME is the actual container id
00:03:20.318  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:20.318  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:20.318  				# We can assume this is a mount from a host where container is running,
00:03:20.318  				# so fetch its hostname to easily identify the target swarm worker.
00:03:20.318  				container="$(< /etc/hostname) ($agent)"
00:03:20.318  			else
00:03:20.318  				# Fallback
00:03:20.318  				container=$agent
00:03:20.318  			fi
00:03:20.318  		fi
00:03:20.318  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:20.318  
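The snippet above builds a pipe-separated identity string for the test node. On this libvirt VM (not a Docker container) the final echo would produce something like the following, using values that appear later in this log; the image-version field depends on whether /image.version exists in the box:

    Fedora Linux 39|6.8.9-200.fc39.x86_64|<image version or N/A>|N/A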
00:03:20.589  [Pipeline] }
00:03:20.605  [Pipeline] // withEnv
00:03:20.614  [Pipeline] setCustomBuildProperty
00:03:20.628  [Pipeline] stage
00:03:20.630  [Pipeline] { (Tests)
00:03:20.646  [Pipeline] sh
00:03:20.930  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:21.203  [Pipeline] sh
00:03:21.485  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:21.758  [Pipeline] timeout
00:03:21.758  Timeout set to expire in 50 min
00:03:21.760  [Pipeline] {
00:03:21.774  [Pipeline] sh
00:03:22.056  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:22.624  HEAD is now at 6584139bf build: use VERSION file for storing version
00:03:22.635  [Pipeline] sh
00:03:22.918  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:23.191  [Pipeline] sh
00:03:23.472  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:23.747  [Pipeline] sh
00:03:24.028  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:03:24.287  ++ readlink -f spdk_repo
00:03:24.287  + DIR_ROOT=/home/vagrant/spdk_repo
00:03:24.287  + [[ -n /home/vagrant/spdk_repo ]]
00:03:24.287  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:24.287  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:24.287  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:24.287  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:24.287  + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:24.287  + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:24.287  + cd /home/vagrant/spdk_repo
00:03:24.287  + source /etc/os-release
00:03:24.287  ++ NAME='Fedora Linux'
00:03:24.287  ++ VERSION='39 (Cloud Edition)'
00:03:24.287  ++ ID=fedora
00:03:24.287  ++ VERSION_ID=39
00:03:24.287  ++ VERSION_CODENAME=
00:03:24.287  ++ PLATFORM_ID=platform:f39
00:03:24.287  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:24.287  ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:24.287  ++ LOGO=fedora-logo-icon
00:03:24.287  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:24.287  ++ HOME_URL=https://fedoraproject.org/
00:03:24.287  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:24.287  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:24.287  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:24.287  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:24.287  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:24.287  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:24.287  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:24.287  ++ SUPPORT_END=2024-11-12
00:03:24.287  ++ VARIANT='Cloud Edition'
00:03:24.287  ++ VARIANT_ID=cloud
00:03:24.287  + uname -a
00:03:24.287  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:24.287  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:24.868  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:25.127  Hugepages
00:03:25.127  node     hugesize     free /  total
00:03:25.127  node0   1048576kB        0 /      0
00:03:25.127  node0      2048kB        0 /      0
00:03:25.127  
00:03:25.127  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:25.127  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:03:25.127  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:03:25.127  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:03:25.127  NVMe                      0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:03:25.127  NVMe                      0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
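setup.sh status reports hugepage pools and enumerates PCI storage devices with their drivers and block nodes; devices still mounted by the OS (here the virtio root disk) are skipped from binding. The NVMe rows can be approximated straight from sysfs (a sketch, not how setup.sh is actually implemented):

    for ctrl in /sys/class/nvme/nvme*; do
        ns=$(ls "$ctrl" | grep -E '^nvme[0-9]+n[0-9]+$' | tr '\n' ' ')
        echo "$(basename "$ctrl")  $(cat "$ctrl/address")  $ns"
    done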
00:03:25.127  + rm -f /tmp/spdk-ld-path
00:03:25.127  + source autorun-spdk.conf
00:03:25.127  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:25.127  ++ SPDK_TEST_NVME=1
00:03:25.127  ++ SPDK_TEST_FTL=1
00:03:25.127  ++ SPDK_TEST_ISAL=1
00:03:25.127  ++ SPDK_RUN_ASAN=1
00:03:25.127  ++ SPDK_RUN_UBSAN=1
00:03:25.127  ++ SPDK_TEST_XNVME=1
00:03:25.127  ++ SPDK_TEST_NVME_FDP=1
00:03:25.127  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:25.127  ++ RUN_NIGHTLY=0
00:03:25.127  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:03:25.127  + [[ -n '' ]]
00:03:25.127  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:25.387  + for M in /var/spdk/build-*-manifest.txt
00:03:25.387  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:25.387  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:25.387  + for M in /var/spdk/build-*-manifest.txt
00:03:25.387  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:25.387  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:25.387  + for M in /var/spdk/build-*-manifest.txt
00:03:25.387  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:25.387  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
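The loop traced above, written out in one place: any build manifest that exists on the node is copied into the shared output directory for archiving.

    for M in /var/spdk/build-*-manifest.txt; do
        [[ -f $M ]] && cp "$M" /home/vagrant/spdk_repo/output/
    done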
00:03:25.387  ++ uname
00:03:25.387  + [[ Linux == \L\i\n\u\x ]]
00:03:25.387  + sudo dmesg -T
00:03:25.387  + sudo dmesg --clear
00:03:25.387  + dmesg_pid=5241
00:03:25.387  + [[ Fedora Linux == FreeBSD ]]
00:03:25.387  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:25.387  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:25.387  + sudo dmesg -Tw
00:03:25.387  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:25.387  + [[ -x /usr/src/fio-static/fio ]]
00:03:25.387  + export FIO_BIN=/usr/src/fio-static/fio
00:03:25.387  + FIO_BIN=/usr/src/fio-static/fio
00:03:25.387  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:25.387  + [[ ! -v VFIO_QEMU_BIN ]]
00:03:25.387  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:25.387  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:25.387  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:25.387  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:25.387  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:25.387  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:25.387  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:25.387    16:13:54  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:25.387   16:13:54  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:25.387    16:13:54  -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:03:25.387   16:13:54  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:25.387   16:13:54  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:25.646  Traceback (most recent call last):
00:03:25.646    File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:03:25.646      import spdk.rpc as rpc  # noqa
00:03:25.646      ^^^^^^^^^^^^^^^^^^^^^^
00:03:25.646    File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in <module>
00:03:25.646      from .version import __version__
00:03:25.646  ModuleNotFoundError: No module named 'spdk.version'
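This traceback comes from scripts/rpc.py being exercised before SPDK has been built: python/spdk/__init__.py imports spdk.version, and judging by the checked-out commit ("build: use VERSION file for storing version") that module appears to be generated at build time, so it does not exist yet at this point. The same error shows up again below; it is noise rather than a failure of the run. A trivial pre-flight check (the path is inferred from the traceback, so treat it as an assumption):

    [[ -e /home/vagrant/spdk_repo/spdk/python/spdk/version.py ]] \
        || echo 'spdk.version not generated yet; build SPDK first'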
00:03:25.646     16:13:54  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:25.646    16:13:54  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:25.646     16:13:54  -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:25.646     16:13:54  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:25.646     16:13:54  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:25.646     16:13:54  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:25.646      16:13:54  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:25.646      16:13:54  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:25.646      16:13:54  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:25.646      16:13:54  -- paths/export.sh@5 -- $ export PATH
00:03:25.646      16:13:54  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:25.646    16:13:54  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:25.646      16:13:54  -- common/autobuild_common.sh@493 -- $ date +%s
00:03:25.646     16:13:54  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733760834.XXXXXX
00:03:25.646    16:13:54  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733760834.x8jOaw
00:03:25.646    16:13:54  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:25.646    16:13:54  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:25.646    16:13:54  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:25.646    16:13:54  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:25.646    16:13:54  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:25.646     16:13:54  -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:25.646     16:13:54  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:25.646     16:13:54  -- common/autotest_common.sh@10 -- $ set +x
00:03:25.646    16:13:54  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:25.646    16:13:54  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:25.646    16:13:54  -- pm/common@17 -- $ local monitor
00:03:25.646    16:13:54  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:25.646    16:13:54  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:25.646    16:13:54  -- pm/common@25 -- $ sleep 1
00:03:25.646     16:13:54  -- pm/common@21 -- $ date +%s
00:03:25.646     16:13:54  -- pm/common@21 -- $ date +%s
00:03:25.646    16:13:54  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733760834
00:03:25.646    16:13:54  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733760834
00:03:25.646  Traceback (most recent call last):
00:03:25.646    File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:03:25.646      import spdk.rpc as rpc  # noqa
00:03:25.646      ^^^^^^^^^^^^^^^^^^^^^^
00:03:25.646    File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in <module>
00:03:25.646      from .version import __version__
00:03:25.646  ModuleNotFoundError: No module named 'spdk.version'
00:03:25.646  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733760834_collect-vmstat.pm.log
00:03:25.646  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733760834_collect-cpu-load.pm.log
00:03:26.582    16:13:55  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:26.582   16:13:55  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:26.582   16:13:55  -- spdk/autobuild.sh@12 -- $ umask 022
00:03:26.582   16:13:55  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:26.582   16:13:55  -- spdk/autobuild.sh@16 -- $ date -u
00:03:26.582  Mon Dec  9 04:13:55 PM UTC 2024
00:03:26.582   16:13:55  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:26.582  v25.01-pre-304-g6584139bf
00:03:26.582   16:13:55  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:26.582   16:13:55  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:26.582   16:13:55  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:26.582   16:13:55  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:26.582   16:13:55  -- common/autotest_common.sh@10 -- $ set +x
00:03:26.582  ************************************
00:03:26.582  START TEST asan
00:03:26.582  ************************************
00:03:26.582  using asan
00:03:26.582   16:13:55 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:26.582  
00:03:26.582  real	0m0.001s
00:03:26.582  user	0m0.000s
00:03:26.582  sys	0m0.000s
00:03:26.582   16:13:55 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:26.582  ************************************
00:03:26.582   16:13:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:26.582  END TEST asan
00:03:26.582  ************************************
00:03:26.841   16:13:55  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:26.841   16:13:55  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:26.841   16:13:55  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:26.841   16:13:55  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:26.841   16:13:55  -- common/autotest_common.sh@10 -- $ set +x
00:03:26.841  ************************************
00:03:26.841  START TEST ubsan
00:03:26.841  ************************************
00:03:26.841  using ubsan
00:03:26.841   16:13:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:26.841  
00:03:26.841  real	0m0.000s
00:03:26.841  user	0m0.000s
00:03:26.841  sys	0m0.000s
00:03:26.841   16:13:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:26.841   16:13:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:26.841  ************************************
00:03:26.841  END TEST ubsan
00:03:26.841  ************************************
00:03:26.841   16:13:55  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:26.841   16:13:55  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:26.841   16:13:55  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:26.841   16:13:55  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:26.841   16:13:55  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:26.841   16:13:55  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:26.841   16:13:55  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:26.841   16:13:55  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:26.841   16:13:55  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:27.136  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:27.136  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:27.420  Using 'verbs' RDMA provider
00:03:43.681  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:01.803  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:01.803  Creating mk/config.mk...done.
00:04:01.803  Creating mk/cc.flags.mk...done.
00:04:01.803  Type 'make' to build.
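The flag string passed to configure above follows directly from the conf file sourced earlier: SPDK_RUN_ASAN=1 becomes --enable-asan, SPDK_RUN_UBSAN=1 becomes --enable-ubsan, SPDK_TEST_XNVME=1 becomes --with-xnvme, and so on (an inferred mapping; get_config_params itself is not shown in this log). Reproducing this configuration by hand:

    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared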
00:04:01.803   16:14:28  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:01.803   16:14:28  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:01.803   16:14:28  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:01.803   16:14:28  -- common/autotest_common.sh@10 -- $ set +x
00:04:01.803  ************************************
00:04:01.803  START TEST make
00:04:01.803  ************************************
00:04:01.804   16:14:28 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:01.804  (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:04:01.804  	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:04:01.804  	meson setup builddir \
00:04:01.804  	-Dwith-libaio=enabled \
00:04:01.804  	-Dwith-liburing=enabled \
00:04:01.804  	-Dwith-libvfn=disabled \
00:04:01.804  	-Dwith-spdk=disabled \
00:04:01.804  	-Dexamples=false \
00:04:01.804  	-Dtests=false \
00:04:01.804  	-Dtools=false && \
00:04:01.804  	meson compile -C builddir && \
00:04:01.804  	cd -)
00:04:02.370  The Meson build system
00:04:02.370  Version: 1.5.0
00:04:02.370  Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:04:02.370  Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:02.370  Build type: native build
00:04:02.370  Project name: xnvme
00:04:02.370  Project version: 0.7.5
00:04:02.370  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:02.370  C linker for the host machine: cc ld.bfd 2.40-14
00:04:02.370  Host machine cpu family: x86_64
00:04:02.370  Host machine cpu: x86_64
00:04:02.370  Message: host_machine.system: linux
00:04:02.370  Compiler for C supports arguments -Wno-missing-braces: YES 
00:04:02.370  Compiler for C supports arguments -Wno-cast-function-type: YES 
00:04:02.370  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:04:02.370  Run-time dependency threads found: YES
00:04:02.370  Has header "setupapi.h" : NO 
00:04:02.370  Has header "linux/blkzoned.h" : YES 
00:04:02.370  Has header "linux/blkzoned.h" : YES (cached)
00:04:02.370  Has header "libaio.h" : YES 
00:04:02.370  Library aio found: YES
00:04:02.370  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:02.370  Run-time dependency liburing found: YES 2.2
00:04:02.370  Dependency libvfn skipped: feature with-libvfn disabled
00:04:02.370  Found CMake: /usr/bin/cmake (3.27.7)
00:04:02.370  Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:04:02.370  Subproject spdk : skipped: feature with-spdk disabled
00:04:02.370  Run-time dependency appleframeworks found: NO (tried framework)
00:04:02.370  Run-time dependency appleframeworks found: NO (tried framework)
00:04:02.370  Library rt found: YES
00:04:02.370  Checking for function "clock_gettime" with dependency -lrt: YES 
00:04:02.370  Configuring xnvme_config.h using configuration
00:04:02.370  Configuring xnvme.spec using configuration
00:04:02.370  Run-time dependency bash-completion found: YES 2.11
00:04:02.370  Message: Bash-completions: /usr/share/bash-completion/completions
00:04:02.370  Program cp found: YES (/usr/bin/cp)
00:04:02.370  Build targets in project: 3
00:04:02.370  
00:04:02.370  xnvme 0.7.5
00:04:02.370  
00:04:02.371    Subprojects
00:04:02.371      spdk         : NO Feature 'with-spdk' disabled
00:04:02.371  
00:04:02.371    User defined options
00:04:02.371      examples     : false
00:04:02.371      tests        : false
00:04:02.371      tools        : false
00:04:02.371      with-libaio  : enabled
00:04:02.371      with-liburing: enabled
00:04:02.371      with-libvfn  : disabled
00:04:02.371      with-spdk    : disabled
00:04:02.371  
00:04:02.371  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:02.939  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:04:02.939  [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:04:02.939  [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:04:02.939  [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:04:02.939  [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:04:02.939  [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:04:02.939  [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:04:02.939  [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:04:02.939  [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:04:02.939  [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:04:02.939  [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:04:02.939  [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:04:02.939  [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:04:03.198  [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:04:03.199  [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:04:03.199  [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:04:03.199  [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:04:03.199  [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:04:03.199  [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:04:03.199  [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:04:03.199  [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:04:03.199  [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:04:03.199  [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:04:03.199  [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:04:03.199  [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:04:03.199  [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:04:03.199  [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:04:03.199  [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:04:03.199  [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:04:03.199  [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:04:03.199  [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:04:03.199  [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:04:03.199  [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:04:03.199  [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:04:03.199  [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:04:03.199  [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:04:03.199  [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:04:03.199  [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:04:03.199  [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:04:03.199  [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:04:03.199  [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:04:03.199  [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:04:03.199  [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:04:03.199  [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:04:03.199  [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:04:03.458  [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:04:03.458  [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:04:03.458  [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:04:03.458  [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:04:03.458  [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:04:03.458  [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:04:03.458  [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:04:03.458  [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:04:03.458  [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:04:03.458  [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:04:03.458  [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:04:03.458  [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:04:03.458  [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:04:03.458  [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:04:03.458  [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:04:03.458  [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:04:03.458  [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:04:03.458  [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:04:03.458  [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:04:03.458  [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:04:03.458  [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:04:03.718  [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:04:03.718  [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:04:03.718  [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:04:03.718  [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:04:03.718  [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:04:03.718  [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:04:03.718  [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:04:03.718  [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:04:03.978  [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:04:03.978  [75/76] Linking static target lib/libxnvme.a
00:04:03.978  [76/76] Linking target lib/libxnvme.so.0.7.5
00:04:03.978  INFO: autodetecting backend as ninja
00:04:03.978  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:03.978  /home/vagrant/spdk_repo/spdk/xnvmebuild
00:04:12.109  The Meson build system
00:04:12.109  Version: 1.5.0
00:04:12.109  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:04:12.109  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:04:12.109  Build type: native build
00:04:12.109  Program cat found: YES (/usr/bin/cat)
00:04:12.109  Project name: DPDK
00:04:12.109  Project version: 24.03.0
00:04:12.109  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:12.109  C linker for the host machine: cc ld.bfd 2.40-14
00:04:12.109  Host machine cpu family: x86_64
00:04:12.109  Host machine cpu: x86_64
00:04:12.109  Message: ## Building in Developer Mode ##
00:04:12.109  Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:12.109  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:04:12.109  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:12.109  Program python3 found: YES (/usr/bin/python3)
00:04:12.109  Program cat found: YES (/usr/bin/cat)
00:04:12.109  Compiler for C supports arguments -march=native: YES 
00:04:12.109  Checking for size of "void *" : 8 
00:04:12.109  Checking for size of "void *" : 8 (cached)
00:04:12.109  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:04:12.109  Library m found: YES
00:04:12.109  Library numa found: YES
00:04:12.109  Has header "numaif.h" : YES 
00:04:12.109  Library fdt found: NO
00:04:12.109  Library execinfo found: NO
00:04:12.109  Has header "execinfo.h" : YES 
00:04:12.109  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:12.109  Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:12.109  Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:12.109  Run-time dependency jansson found: NO (tried pkgconfig)
00:04:12.109  Run-time dependency openssl found: YES 3.1.1
00:04:12.109  Run-time dependency libpcap found: YES 1.10.4
00:04:12.109  Has header "pcap.h" with dependency libpcap: YES 
00:04:12.109  Compiler for C supports arguments -Wcast-qual: YES 
00:04:12.109  Compiler for C supports arguments -Wdeprecated: YES 
00:04:12.109  Compiler for C supports arguments -Wformat: YES 
00:04:12.109  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:04:12.109  Compiler for C supports arguments -Wformat-security: NO 
00:04:12.109  Compiler for C supports arguments -Wmissing-declarations: YES 
00:04:12.109  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:04:12.109  Compiler for C supports arguments -Wnested-externs: YES 
00:04:12.109  Compiler for C supports arguments -Wold-style-definition: YES 
00:04:12.109  Compiler for C supports arguments -Wpointer-arith: YES 
00:04:12.109  Compiler for C supports arguments -Wsign-compare: YES 
00:04:12.109  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:04:12.109  Compiler for C supports arguments -Wundef: YES 
00:04:12.109  Compiler for C supports arguments -Wwrite-strings: YES 
00:04:12.109  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:04:12.109  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:04:12.109  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:04:12.109  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:04:12.109  Program objdump found: YES (/usr/bin/objdump)
00:04:12.109  Compiler for C supports arguments -mavx512f: YES 
00:04:12.109  Checking if "AVX512 checking" compiles: YES 
00:04:12.109  Fetching value of define "__SSE4_2__" : 1 
00:04:12.109  Fetching value of define "__AES__" : 1 
00:04:12.109  Fetching value of define "__AVX__" : 1 
00:04:12.109  Fetching value of define "__AVX2__" : 1 
00:04:12.109  Fetching value of define "__AVX512BW__" : 1 
00:04:12.109  Fetching value of define "__AVX512CD__" : 1 
00:04:12.109  Fetching value of define "__AVX512DQ__" : 1 
00:04:12.109  Fetching value of define "__AVX512F__" : 1 
00:04:12.109  Fetching value of define "__AVX512VL__" : 1 
00:04:12.109  Fetching value of define "__PCLMUL__" : 1 
00:04:12.109  Fetching value of define "__RDRND__" : 1 
00:04:12.109  Fetching value of define "__RDSEED__" : 1 
00:04:12.109  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:04:12.109  Fetching value of define "__znver1__" : (undefined) 
00:04:12.109  Fetching value of define "__znver2__" : (undefined) 
00:04:12.110  Fetching value of define "__znver3__" : (undefined) 
00:04:12.110  Fetching value of define "__znver4__" : (undefined) 
00:04:12.110  Library asan found: YES
00:04:12.110  Compiler for C supports arguments -Wno-format-truncation: YES 
00:04:12.110  Message: lib/log: Defining dependency "log"
00:04:12.110  Message: lib/kvargs: Defining dependency "kvargs"
00:04:12.110  Message: lib/telemetry: Defining dependency "telemetry"
00:04:12.110  Library rt found: YES
00:04:12.110  Checking for function "getentropy" : NO 
00:04:12.110  Message: lib/eal: Defining dependency "eal"
00:04:12.110  Message: lib/ring: Defining dependency "ring"
00:04:12.110  Message: lib/rcu: Defining dependency "rcu"
00:04:12.110  Message: lib/mempool: Defining dependency "mempool"
00:04:12.110  Message: lib/mbuf: Defining dependency "mbuf"
00:04:12.110  Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:12.110  Fetching value of define "__AVX512F__" : 1 (cached)
00:04:12.110  Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:12.110  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:12.110  Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:12.110  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:04:12.110  Compiler for C supports arguments -mpclmul: YES 
00:04:12.110  Compiler for C supports arguments -maes: YES 
00:04:12.110  Compiler for C supports arguments -mavx512f: YES (cached)
00:04:12.110  Compiler for C supports arguments -mavx512bw: YES 
00:04:12.110  Compiler for C supports arguments -mavx512dq: YES 
00:04:12.110  Compiler for C supports arguments -mavx512vl: YES 
00:04:12.110  Compiler for C supports arguments -mvpclmulqdq: YES 
00:04:12.110  Compiler for C supports arguments -mavx2: YES 
00:04:12.110  Compiler for C supports arguments -mavx: YES 
00:04:12.110  Message: lib/net: Defining dependency "net"
00:04:12.110  Message: lib/meter: Defining dependency "meter"
00:04:12.110  Message: lib/ethdev: Defining dependency "ethdev"
00:04:12.110  Message: lib/pci: Defining dependency "pci"
00:04:12.110  Message: lib/cmdline: Defining dependency "cmdline"
00:04:12.110  Message: lib/hash: Defining dependency "hash"
00:04:12.110  Message: lib/timer: Defining dependency "timer"
00:04:12.110  Message: lib/compressdev: Defining dependency "compressdev"
00:04:12.110  Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:12.110  Message: lib/dmadev: Defining dependency "dmadev"
00:04:12.110  Compiler for C supports arguments -Wno-cast-qual: YES 
00:04:12.110  Message: lib/power: Defining dependency "power"
00:04:12.110  Message: lib/reorder: Defining dependency "reorder"
00:04:12.110  Message: lib/security: Defining dependency "security"
00:04:12.110  Has header "linux/userfaultfd.h" : YES 
00:04:12.110  Has header "linux/vduse.h" : YES 
00:04:12.110  Message: lib/vhost: Defining dependency "vhost"
00:04:12.110  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:12.110  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:12.110  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:12.110  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:12.110  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:12.110  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:12.110  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:12.110  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:12.110  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:12.110  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:12.110  Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:12.110  Configuring doxy-api-html.conf using configuration
00:04:12.110  Configuring doxy-api-man.conf using configuration
00:04:12.110  Program mandb found: YES (/usr/bin/mandb)
00:04:12.110  Program sphinx-build found: NO
00:04:12.110  Configuring rte_build_config.h using configuration
00:04:12.110  Message: 
00:04:12.110  =================
00:04:12.110  Applications Enabled
00:04:12.110  =================
00:04:12.110  
00:04:12.110  apps:
00:04:12.110  	
00:04:12.110  
00:04:12.110  Message: 
00:04:12.110  =================
00:04:12.110  Libraries Enabled
00:04:12.110  =================
00:04:12.110  
00:04:12.110  libs:
00:04:12.110  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:04:12.110  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:04:12.110  	cryptodev, dmadev, power, reorder, security, vhost, 
00:04:12.110  
00:04:12.110  Message: 
00:04:12.110  ===============
00:04:12.110  Drivers Enabled
00:04:12.110  ===============
00:04:12.110  
00:04:12.110  common:
00:04:12.110  	
00:04:12.110  bus:
00:04:12.110  	pci, vdev, 
00:04:12.110  mempool:
00:04:12.110  	ring, 
00:04:12.110  dma:
00:04:12.110  	
00:04:12.110  net:
00:04:12.110  	
00:04:12.110  crypto:
00:04:12.110  	
00:04:12.110  compress:
00:04:12.110  	
00:04:12.110  vdpa:
00:04:12.110  	
00:04:12.110  
00:04:12.110  Message: 
00:04:12.110  =================
00:04:12.110  Content Skipped
00:04:12.110  =================
00:04:12.110  
00:04:12.110  apps:
00:04:12.110  	dumpcap:	explicitly disabled via build config
00:04:12.110  	graph:	explicitly disabled via build config
00:04:12.110  	pdump:	explicitly disabled via build config
00:04:12.110  	proc-info:	explicitly disabled via build config
00:04:12.110  	test-acl:	explicitly disabled via build config
00:04:12.110  	test-bbdev:	explicitly disabled via build config
00:04:12.110  	test-cmdline:	explicitly disabled via build config
00:04:12.110  	test-compress-perf:	explicitly disabled via build config
00:04:12.110  	test-crypto-perf:	explicitly disabled via build config
00:04:12.110  	test-dma-perf:	explicitly disabled via build config
00:04:12.110  	test-eventdev:	explicitly disabled via build config
00:04:12.110  	test-fib:	explicitly disabled via build config
00:04:12.110  	test-flow-perf:	explicitly disabled via build config
00:04:12.110  	test-gpudev:	explicitly disabled via build config
00:04:12.110  	test-mldev:	explicitly disabled via build config
00:04:12.110  	test-pipeline:	explicitly disabled via build config
00:04:12.110  	test-pmd:	explicitly disabled via build config
00:04:12.110  	test-regex:	explicitly disabled via build config
00:04:12.110  	test-sad:	explicitly disabled via build config
00:04:12.110  	test-security-perf:	explicitly disabled via build config
00:04:12.110  	
00:04:12.110  libs:
00:04:12.110  	argparse:	explicitly disabled via build config
00:04:12.110  	metrics:	explicitly disabled via build config
00:04:12.110  	acl:	explicitly disabled via build config
00:04:12.110  	bbdev:	explicitly disabled via build config
00:04:12.110  	bitratestats:	explicitly disabled via build config
00:04:12.110  	bpf:	explicitly disabled via build config
00:04:12.110  	cfgfile:	explicitly disabled via build config
00:04:12.110  	distributor:	explicitly disabled via build config
00:04:12.110  	efd:	explicitly disabled via build config
00:04:12.110  	eventdev:	explicitly disabled via build config
00:04:12.110  	dispatcher:	explicitly disabled via build config
00:04:12.110  	gpudev:	explicitly disabled via build config
00:04:12.110  	gro:	explicitly disabled via build config
00:04:12.110  	gso:	explicitly disabled via build config
00:04:12.110  	ip_frag:	explicitly disabled via build config
00:04:12.110  	jobstats:	explicitly disabled via build config
00:04:12.110  	latencystats:	explicitly disabled via build config
00:04:12.110  	lpm:	explicitly disabled via build config
00:04:12.110  	member:	explicitly disabled via build config
00:04:12.110  	pcapng:	explicitly disabled via build config
00:04:12.110  	rawdev:	explicitly disabled via build config
00:04:12.110  	regexdev:	explicitly disabled via build config
00:04:12.110  	mldev:	explicitly disabled via build config
00:04:12.110  	rib:	explicitly disabled via build config
00:04:12.110  	sched:	explicitly disabled via build config
00:04:12.110  	stack:	explicitly disabled via build config
00:04:12.110  	ipsec:	explicitly disabled via build config
00:04:12.110  	pdcp:	explicitly disabled via build config
00:04:12.110  	fib:	explicitly disabled via build config
00:04:12.110  	port:	explicitly disabled via build config
00:04:12.110  	pdump:	explicitly disabled via build config
00:04:12.110  	table:	explicitly disabled via build config
00:04:12.110  	pipeline:	explicitly disabled via build config
00:04:12.110  	graph:	explicitly disabled via build config
00:04:12.110  	node:	explicitly disabled via build config
00:04:12.110  	
00:04:12.110  drivers:
00:04:12.110  	common/cpt:	not in enabled drivers build config
00:04:12.110  	common/dpaax:	not in enabled drivers build config
00:04:12.110  	common/iavf:	not in enabled drivers build config
00:04:12.110  	common/idpf:	not in enabled drivers build config
00:04:12.110  	common/ionic:	not in enabled drivers build config
00:04:12.110  	common/mvep:	not in enabled drivers build config
00:04:12.110  	common/octeontx:	not in enabled drivers build config
00:04:12.110  	bus/auxiliary:	not in enabled drivers build config
00:04:12.110  	bus/cdx:	not in enabled drivers build config
00:04:12.110  	bus/dpaa:	not in enabled drivers build config
00:04:12.110  	bus/fslmc:	not in enabled drivers build config
00:04:12.110  	bus/ifpga:	not in enabled drivers build config
00:04:12.110  	bus/platform:	not in enabled drivers build config
00:04:12.110  	bus/uacce:	not in enabled drivers build config
00:04:12.110  	bus/vmbus:	not in enabled drivers build config
00:04:12.110  	common/cnxk:	not in enabled drivers build config
00:04:12.110  	common/mlx5:	not in enabled drivers build config
00:04:12.111  	common/nfp:	not in enabled drivers build config
00:04:12.111  	common/nitrox:	not in enabled drivers build config
00:04:12.111  	common/qat:	not in enabled drivers build config
00:04:12.111  	common/sfc_efx:	not in enabled drivers build config
00:04:12.111  	mempool/bucket:	not in enabled drivers build config
00:04:12.111  	mempool/cnxk:	not in enabled drivers build config
00:04:12.111  	mempool/dpaa:	not in enabled drivers build config
00:04:12.111  	mempool/dpaa2:	not in enabled drivers build config
00:04:12.111  	mempool/octeontx:	not in enabled drivers build config
00:04:12.111  	mempool/stack:	not in enabled drivers build config
00:04:12.111  	dma/cnxk:	not in enabled drivers build config
00:04:12.111  	dma/dpaa:	not in enabled drivers build config
00:04:12.111  	dma/dpaa2:	not in enabled drivers build config
00:04:12.111  	dma/hisilicon:	not in enabled drivers build config
00:04:12.111  	dma/idxd:	not in enabled drivers build config
00:04:12.111  	dma/ioat:	not in enabled drivers build config
00:04:12.111  	dma/skeleton:	not in enabled drivers build config
00:04:12.111  	net/af_packet:	not in enabled drivers build config
00:04:12.111  	net/af_xdp:	not in enabled drivers build config
00:04:12.111  	net/ark:	not in enabled drivers build config
00:04:12.111  	net/atlantic:	not in enabled drivers build config
00:04:12.111  	net/avp:	not in enabled drivers build config
00:04:12.111  	net/axgbe:	not in enabled drivers build config
00:04:12.111  	net/bnx2x:	not in enabled drivers build config
00:04:12.111  	net/bnxt:	not in enabled drivers build config
00:04:12.111  	net/bonding:	not in enabled drivers build config
00:04:12.111  	net/cnxk:	not in enabled drivers build config
00:04:12.111  	net/cpfl:	not in enabled drivers build config
00:04:12.111  	net/cxgbe:	not in enabled drivers build config
00:04:12.111  	net/dpaa:	not in enabled drivers build config
00:04:12.111  	net/dpaa2:	not in enabled drivers build config
00:04:12.111  	net/e1000:	not in enabled drivers build config
00:04:12.111  	net/ena:	not in enabled drivers build config
00:04:12.111  	net/enetc:	not in enabled drivers build config
00:04:12.111  	net/enetfec:	not in enabled drivers build config
00:04:12.111  	net/enic:	not in enabled drivers build config
00:04:12.111  	net/failsafe:	not in enabled drivers build config
00:04:12.111  	net/fm10k:	not in enabled drivers build config
00:04:12.111  	net/gve:	not in enabled drivers build config
00:04:12.111  	net/hinic:	not in enabled drivers build config
00:04:12.111  	net/hns3:	not in enabled drivers build config
00:04:12.111  	net/i40e:	not in enabled drivers build config
00:04:12.111  	net/iavf:	not in enabled drivers build config
00:04:12.111  	net/ice:	not in enabled drivers build config
00:04:12.111  	net/idpf:	not in enabled drivers build config
00:04:12.111  	net/igc:	not in enabled drivers build config
00:04:12.111  	net/ionic:	not in enabled drivers build config
00:04:12.111  	net/ipn3ke:	not in enabled drivers build config
00:04:12.111  	net/ixgbe:	not in enabled drivers build config
00:04:12.111  	net/mana:	not in enabled drivers build config
00:04:12.111  	net/memif:	not in enabled drivers build config
00:04:12.111  	net/mlx4:	not in enabled drivers build config
00:04:12.111  	net/mlx5:	not in enabled drivers build config
00:04:12.111  	net/mvneta:	not in enabled drivers build config
00:04:12.111  	net/mvpp2:	not in enabled drivers build config
00:04:12.111  	net/netvsc:	not in enabled drivers build config
00:04:12.111  	net/nfb:	not in enabled drivers build config
00:04:12.111  	net/nfp:	not in enabled drivers build config
00:04:12.111  	net/ngbe:	not in enabled drivers build config
00:04:12.111  	net/null:	not in enabled drivers build config
00:04:12.111  	net/octeontx:	not in enabled drivers build config
00:04:12.111  	net/octeon_ep:	not in enabled drivers build config
00:04:12.111  	net/pcap:	not in enabled drivers build config
00:04:12.111  	net/pfe:	not in enabled drivers build config
00:04:12.111  	net/qede:	not in enabled drivers build config
00:04:12.111  	net/ring:	not in enabled drivers build config
00:04:12.111  	net/sfc:	not in enabled drivers build config
00:04:12.111  	net/softnic:	not in enabled drivers build config
00:04:12.111  	net/tap:	not in enabled drivers build config
00:04:12.111  	net/thunderx:	not in enabled drivers build config
00:04:12.111  	net/txgbe:	not in enabled drivers build config
00:04:12.111  	net/vdev_netvsc:	not in enabled drivers build config
00:04:12.111  	net/vhost:	not in enabled drivers build config
00:04:12.111  	net/virtio:	not in enabled drivers build config
00:04:12.111  	net/vmxnet3:	not in enabled drivers build config
00:04:12.111  	raw/*:	missing internal dependency, "rawdev"
00:04:12.111  	crypto/armv8:	not in enabled drivers build config
00:04:12.111  	crypto/bcmfs:	not in enabled drivers build config
00:04:12.111  	crypto/caam_jr:	not in enabled drivers build config
00:04:12.111  	crypto/ccp:	not in enabled drivers build config
00:04:12.111  	crypto/cnxk:	not in enabled drivers build config
00:04:12.111  	crypto/dpaa_sec:	not in enabled drivers build config
00:04:12.111  	crypto/dpaa2_sec:	not in enabled drivers build config
00:04:12.111  	crypto/ipsec_mb:	not in enabled drivers build config
00:04:12.111  	crypto/mlx5:	not in enabled drivers build config
00:04:12.111  	crypto/mvsam:	not in enabled drivers build config
00:04:12.111  	crypto/nitrox:	not in enabled drivers build config
00:04:12.111  	crypto/null:	not in enabled drivers build config
00:04:12.111  	crypto/octeontx:	not in enabled drivers build config
00:04:12.111  	crypto/openssl:	not in enabled drivers build config
00:04:12.111  	crypto/scheduler:	not in enabled drivers build config
00:04:12.111  	crypto/uadk:	not in enabled drivers build config
00:04:12.111  	crypto/virtio:	not in enabled drivers build config
00:04:12.111  	compress/isal:	not in enabled drivers build config
00:04:12.111  	compress/mlx5:	not in enabled drivers build config
00:04:12.111  	compress/nitrox:	not in enabled drivers build config
00:04:12.111  	compress/octeontx:	not in enabled drivers build config
00:04:12.111  	compress/zlib:	not in enabled drivers build config
00:04:12.111  	regex/*:	missing internal dependency, "regexdev"
00:04:12.111  	ml/*:	missing internal dependency, "mldev"
00:04:12.111  	vdpa/ifc:	not in enabled drivers build config
00:04:12.111  	vdpa/mlx5:	not in enabled drivers build config
00:04:12.111  	vdpa/nfp:	not in enabled drivers build config
00:04:12.111  	vdpa/sfc:	not in enabled drivers build config
00:04:12.111  	event/*:	missing internal dependency, "eventdev"
00:04:12.111  	baseband/*:	missing internal dependency, "bbdev"
00:04:12.111  	gpu/*:	missing internal dependency, "gpudev"
00:04:12.111  	
00:04:12.111  
00:04:12.111  Build targets in project: 85
00:04:12.111  
00:04:12.111  DPDK 24.03.0
00:04:12.111  
00:04:12.111    User defined options
00:04:12.111      buildtype          : debug
00:04:12.111      default_library    : shared
00:04:12.111      libdir             : lib
00:04:12.111      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:12.111      b_sanitize         : address
00:04:12.111      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:04:12.111      c_link_args        : 
00:04:12.111      cpu_instruction_set: native
00:04:12.111      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:04:12.111      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:04:12.111      enable_docs        : false
00:04:12.111      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:12.111      enable_kmods       : false
00:04:12.111      max_lcores         : 128
00:04:12.111      tests              : false
00:04:12.111  
00:04:12.111  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:12.111  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:04:12.111  [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:12.111  [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:12.111  [3/268] Linking static target lib/librte_kvargs.a
00:04:12.111  [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:12.111  [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:12.111  [6/268] Linking static target lib/librte_log.a
00:04:12.370  [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:12.629  [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:12.629  [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:12.629  [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:12.629  [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:12.629  [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:12.629  [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:12.629  [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:04:12.629  [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:12.629  [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:12.629  [17/268] Linking static target lib/librte_telemetry.a
00:04:12.888  [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:13.147  [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:13.147  [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:13.147  [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:13.147  [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:13.147  [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:13.406  [24/268] Linking target lib/librte_log.so.24.1
00:04:13.406  [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:13.406  [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:13.406  [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:13.406  [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:13.406  [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:13.406  [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:13.406  [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:04:13.664  [32/268] Linking target lib/librte_kvargs.so.24.1
00:04:13.664  [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:13.664  [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:13.664  [35/268] Linking target lib/librte_telemetry.so.24.1
00:04:13.664  [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:13.922  [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:04:13.922  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:13.923  [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:13.923  [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:13.923  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:13.923  [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:04:13.923  [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:13.923  [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:13.923  [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:14.180  [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:14.181  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:14.181  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:14.181  [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:14.440  [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:14.440  [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:14.440  [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:14.440  [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:14.699  [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:14.699  [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:14.699  [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:14.699  [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:14.699  [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:14.699  [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:14.958  [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:14.958  [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:14.958  [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:14.958  [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:14.958  [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:14.958  [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:15.273  [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:15.273  [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:15.273  [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:15.273  [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:15.554  [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:15.554  [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:15.554  [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:15.554  [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:15.554  [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:15.554  [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:15.813  [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:15.813  [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:15.813  [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:15.813  [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:15.813  [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:15.813  [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:15.813  [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:15.813  [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:16.074  [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:16.074  [85/268] Linking static target lib/librte_eal.a
00:04:16.074  [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:16.074  [87/268] Linking static target lib/librte_ring.a
00:04:16.334  [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:16.334  [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:04:16.334  [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:16.334  [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:16.334  [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:04:16.334  [93/268] Linking static target lib/librte_mempool.a
00:04:16.334  [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:16.592  [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:04:16.592  [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:16.592  [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:16.592  [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:04:16.592  [99/268] Linking static target lib/librte_rcu.a
00:04:16.592  [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:16.851  [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:04:16.851  [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:04:16.851  [103/268] Linking static target lib/librte_mbuf.a
00:04:16.851  [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:04:16.851  [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:17.110  [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:17.110  [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:17.110  [108/268] Linking static target lib/librte_net.a
00:04:17.110  [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:04:17.110  [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:17.110  [111/268] Linking static target lib/librte_meter.a
00:04:17.369  [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:04:17.369  [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:17.369  [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:04:17.628  [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:04:17.628  [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:04:17.628  [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:04:17.628  [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:04:17.887  [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:04:17.887  [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:04:18.147  [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:04:18.147  [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:04:18.147  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:04:18.405  [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:04:18.405  [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:04:18.405  [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:18.405  [127/268] Linking static target lib/librte_pci.a
00:04:18.406  [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:18.406  [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:04:18.664  [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:04:18.665  [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:04:18.665  [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:04:18.665  [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:04:18.665  [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:18.665  [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:04:18.665  [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:18.665  [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:18.665  [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:18.924  [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:18.924  [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:18.924  [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:18.924  [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:04:18.924  [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:04:18.924  [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:04:18.924  [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:18.924  [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:04:18.924  [147/268] Linking static target lib/librte_cmdline.a
00:04:19.184  [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:04:19.444  [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:04:19.444  [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:04:19.444  [151/268] Linking static target lib/librte_timer.a
00:04:19.444  [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:04:19.703  [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:04:19.703  [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:04:19.703  [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:04:19.703  [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:04:19.703  [157/268] Linking static target lib/librte_ethdev.a
00:04:19.963  [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:04:19.963  [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:04:19.963  [160/268] Linking static target lib/librte_compressdev.a
00:04:19.963  [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:04:19.963  [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:04:20.222  [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:20.222  [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:04:20.222  [165/268] Linking static target lib/librte_hash.a
00:04:20.481  [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:04:20.481  [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:04:20.481  [168/268] Linking static target lib/librte_dmadev.a
00:04:20.481  [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:20.481  [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:20.481  [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:20.481  [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:20.740  [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:21.000  [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:21.000  [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.000  [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:21.000  [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:21.000  [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:21.000  [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:04:21.000  [180/268] Linking static target lib/librte_cryptodev.a
00:04:21.259  [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:21.259  [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:21.259  [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.259  [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.518  [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:21.518  [186/268] Linking static target lib/librte_power.a
00:04:21.778  [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:21.778  [188/268] Linking static target lib/librte_reorder.a
00:04:21.778  [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:21.778  [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:21.778  [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:21.778  [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:21.778  [193/268] Linking static target lib/librte_security.a
00:04:22.346  [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:04:22.346  [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:22.606  [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:04:22.606  [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:22.606  [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:04:22.606  [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:22.865  [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:22.865  [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:23.124  [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:23.124  [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:23.124  [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:23.124  [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:23.124  [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:23.383  [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:23.383  [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:23.383  [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:23.383  [210/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:23.383  [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:23.643  [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:23.643  [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:23.643  [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:23.643  [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:23.643  [216/268] Linking static target drivers/librte_bus_vdev.a
00:04:23.643  [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:23.643  [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:23.643  [219/268] Linking static target drivers/librte_bus_pci.a
00:04:23.643  [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:23.643  [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:23.909  [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:04:23.909  [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:23.909  [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:23.909  [225/268] Linking static target drivers/librte_mempool_ring.a
00:04:23.909  [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:24.180  [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:24.749  [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:04:28.939  [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:04:28.939  [230/268] Linking static target lib/librte_vhost.a
00:04:28.939  [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:04:28.939  [232/268] Linking target lib/librte_eal.so.24.1
00:04:28.939  [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:04:28.939  [234/268] Linking target lib/librte_pci.so.24.1
00:04:28.939  [235/268] Linking target lib/librte_ring.so.24.1
00:04:28.939  [236/268] Linking target lib/librte_meter.so.24.1
00:04:28.939  [237/268] Linking target lib/librte_dmadev.so.24.1
00:04:28.939  [238/268] Linking target lib/librte_timer.so.24.1
00:04:28.939  [239/268] Linking target drivers/librte_bus_vdev.so.24.1
00:04:28.939  [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:04:28.939  [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:04:28.939  [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:04:28.939  [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:04:28.939  [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:04:28.939  [245/268] Linking target lib/librte_mempool.so.24.1
00:04:28.939  [246/268] Linking target lib/librte_rcu.so.24.1
00:04:28.939  [247/268] Linking target drivers/librte_bus_pci.so.24.1
00:04:28.939  [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:28.939  [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:04:28.939  [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:04:28.939  [251/268] Linking target lib/librte_mbuf.so.24.1
00:04:28.939  [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:04:29.198  [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:04:29.198  [254/268] Linking target lib/librte_reorder.so.24.1
00:04:29.198  [255/268] Linking target lib/librte_net.so.24.1
00:04:29.198  [256/268] Linking target lib/librte_cryptodev.so.24.1
00:04:29.198  [257/268] Linking target lib/librte_compressdev.so.24.1
00:04:29.198  [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:04:29.198  [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:04:29.458  [260/268] Linking target lib/librte_security.so.24.1
00:04:29.458  [261/268] Linking target lib/librte_cmdline.so.24.1
00:04:29.458  [262/268] Linking target lib/librte_hash.so.24.1
00:04:29.458  [263/268] Linking target lib/librte_ethdev.so.24.1
00:04:29.458  [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:04:29.458  [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:04:29.458  [266/268] Linking target lib/librte_power.so.24.1
00:04:30.396  [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:04:30.396  [268/268] Linking target lib/librte_vhost.so.24.1
00:04:30.396  INFO: autodetecting backend as ninja
00:04:30.396  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:04:48.492    CC lib/log/log_flags.o
00:04:48.492    CC lib/log/log.o
00:04:48.492    CC lib/log/log_deprecated.o
00:04:48.492    CC lib/ut_mock/mock.o
00:04:48.492    CC lib/ut/ut.o
00:04:48.492    LIB libspdk_ut.a
00:04:48.492    LIB libspdk_ut_mock.a
00:04:48.492    LIB libspdk_log.a
00:04:48.492    SO libspdk_ut.so.2.0
00:04:48.492    SO libspdk_ut_mock.so.6.0
00:04:48.492    SO libspdk_log.so.7.1
00:04:48.492    SYMLINK libspdk_ut.so
00:04:48.492    SYMLINK libspdk_ut_mock.so
00:04:48.492    SYMLINK libspdk_log.so
00:04:48.751    CC lib/ioat/ioat.o
00:04:48.751    CC lib/dma/dma.o
00:04:48.751    CC lib/util/base64.o
00:04:48.751    CC lib/util/bit_array.o
00:04:48.751    CC lib/util/cpuset.o
00:04:48.751    CC lib/util/crc16.o
00:04:48.751    CC lib/util/crc32.o
00:04:48.751    CC lib/util/crc32c.o
00:04:48.751    CXX lib/trace_parser/trace.o
00:04:48.751    CC lib/vfio_user/host/vfio_user_pci.o
00:04:48.751    CC lib/util/crc32_ieee.o
00:04:48.751    CC lib/util/crc64.o
00:04:48.751    CC lib/util/dif.o
00:04:49.010    CC lib/vfio_user/host/vfio_user.o
00:04:49.010    LIB libspdk_dma.a
00:04:49.010    CC lib/util/fd.o
00:04:49.010    SO libspdk_dma.so.5.0
00:04:49.010    CC lib/util/fd_group.o
00:04:49.010    CC lib/util/file.o
00:04:49.010    CC lib/util/hexlify.o
00:04:49.010    LIB libspdk_ioat.a
00:04:49.010    SYMLINK libspdk_dma.so
00:04:49.010    CC lib/util/iov.o
00:04:49.010    SO libspdk_ioat.so.7.0
00:04:49.010    CC lib/util/math.o
00:04:49.010    SYMLINK libspdk_ioat.so
00:04:49.010    CC lib/util/net.o
00:04:49.010    CC lib/util/pipe.o
00:04:49.010    LIB libspdk_vfio_user.a
00:04:49.011    CC lib/util/strerror_tls.o
00:04:49.011    CC lib/util/string.o
00:04:49.011    SO libspdk_vfio_user.so.5.0
00:04:49.300    SYMLINK libspdk_vfio_user.so
00:04:49.300    CC lib/util/uuid.o
00:04:49.300    CC lib/util/xor.o
00:04:49.300    CC lib/util/zipf.o
00:04:49.300    CC lib/util/md5.o
00:04:49.560    LIB libspdk_util.a
00:04:49.560    SO libspdk_util.so.10.1
00:04:49.819    LIB libspdk_trace_parser.a
00:04:49.819    SO libspdk_trace_parser.so.6.0
00:04:49.819    SYMLINK libspdk_util.so
00:04:49.819    SYMLINK libspdk_trace_parser.so
00:04:50.077    CC lib/idxd/idxd.o
00:04:50.077    CC lib/idxd/idxd_user.o
00:04:50.077    CC lib/idxd/idxd_kernel.o
00:04:50.077    CC lib/env_dpdk/env.o
00:04:50.077    CC lib/env_dpdk/memory.o
00:04:50.077    CC lib/env_dpdk/pci.o
00:04:50.077    CC lib/rdma_utils/rdma_utils.o
00:04:50.077    CC lib/json/json_parse.o
00:04:50.077    CC lib/conf/conf.o
00:04:50.077    CC lib/vmd/vmd.o
00:04:50.077    CC lib/vmd/led.o
00:04:50.336    LIB libspdk_conf.a
00:04:50.336    CC lib/json/json_util.o
00:04:50.336    CC lib/json/json_write.o
00:04:50.336    SO libspdk_conf.so.6.0
00:04:50.336    LIB libspdk_rdma_utils.a
00:04:50.336    SO libspdk_rdma_utils.so.1.0
00:04:50.336    CC lib/env_dpdk/init.o
00:04:50.336    SYMLINK libspdk_conf.so
00:04:50.336    CC lib/env_dpdk/threads.o
00:04:50.336    SYMLINK libspdk_rdma_utils.so
00:04:50.336    CC lib/env_dpdk/pci_ioat.o
00:04:50.336    CC lib/env_dpdk/pci_virtio.o
00:04:50.594    CC lib/env_dpdk/pci_vmd.o
00:04:50.594    CC lib/env_dpdk/pci_idxd.o
00:04:50.594    CC lib/env_dpdk/pci_event.o
00:04:50.594    LIB libspdk_json.a
00:04:50.594    SO libspdk_json.so.6.0
00:04:50.594    CC lib/env_dpdk/sigbus_handler.o
00:04:50.594    CC lib/env_dpdk/pci_dpdk.o
00:04:50.594    LIB libspdk_idxd.a
00:04:50.594    SYMLINK libspdk_json.so
00:04:50.594    CC lib/env_dpdk/pci_dpdk_2207.o
00:04:50.594    CC lib/env_dpdk/pci_dpdk_2211.o
00:04:50.594    CC lib/rdma_provider/common.o
00:04:50.594    CC lib/rdma_provider/rdma_provider_verbs.o
00:04:50.594    SO libspdk_idxd.so.12.1
00:04:50.852    LIB libspdk_vmd.a
00:04:50.852    SO libspdk_vmd.so.6.0
00:04:50.852    SYMLINK libspdk_idxd.so
00:04:50.852    SYMLINK libspdk_vmd.so
00:04:50.852    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:50.852    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:50.852    CC lib/jsonrpc/jsonrpc_server.o
00:04:50.852    CC lib/jsonrpc/jsonrpc_client.o
00:04:50.852    LIB libspdk_rdma_provider.a
00:04:50.852    SO libspdk_rdma_provider.so.7.0
00:04:51.110    SYMLINK libspdk_rdma_provider.so
00:04:51.110    LIB libspdk_jsonrpc.a
00:04:51.110    SO libspdk_jsonrpc.so.6.0
00:04:51.369    SYMLINK libspdk_jsonrpc.so
00:04:51.627    LIB libspdk_env_dpdk.a
00:04:51.627    SO libspdk_env_dpdk.so.15.1
00:04:51.886    CC lib/rpc/rpc.o
00:04:51.886    SYMLINK libspdk_env_dpdk.so
00:04:51.886    LIB libspdk_rpc.a
00:04:52.144    SO libspdk_rpc.so.6.0
00:04:52.144    SYMLINK libspdk_rpc.so
00:04:52.403    CC lib/trace/trace.o
00:04:52.403    CC lib/trace/trace_rpc.o
00:04:52.403    CC lib/trace/trace_flags.o
00:04:52.403    CC lib/keyring/keyring.o
00:04:52.403    CC lib/keyring/keyring_rpc.o
00:04:52.403    CC lib/notify/notify.o
00:04:52.403    CC lib/notify/notify_rpc.o
00:04:52.662    LIB libspdk_notify.a
00:04:52.662    SO libspdk_notify.so.6.0
00:04:52.662    LIB libspdk_keyring.a
00:04:52.662    LIB libspdk_trace.a
00:04:52.921    SYMLINK libspdk_notify.so
00:04:52.921    SO libspdk_keyring.so.2.0
00:04:52.921    SO libspdk_trace.so.11.0
00:04:52.921    SYMLINK libspdk_trace.so
00:04:52.921    SYMLINK libspdk_keyring.so
00:04:53.488    CC lib/thread/thread.o
00:04:53.488    CC lib/thread/iobuf.o
00:04:53.488    CC lib/sock/sock.o
00:04:53.488    CC lib/sock/sock_rpc.o
00:04:53.746    LIB libspdk_sock.a
00:04:53.746    SO libspdk_sock.so.10.0
00:04:54.005    SYMLINK libspdk_sock.so
00:04:54.263    CC lib/nvme/nvme_ctrlr_cmd.o
00:04:54.263    CC lib/nvme/nvme_ctrlr.o
00:04:54.263    CC lib/nvme/nvme_fabric.o
00:04:54.263    CC lib/nvme/nvme_ns_cmd.o
00:04:54.263    CC lib/nvme/nvme_ns.o
00:04:54.263    CC lib/nvme/nvme_pcie_common.o
00:04:54.263    CC lib/nvme/nvme_qpair.o
00:04:54.263    CC lib/nvme/nvme_pcie.o
00:04:54.263    CC lib/nvme/nvme.o
00:04:55.200    CC lib/nvme/nvme_quirks.o
00:04:55.200    LIB libspdk_thread.a
00:04:55.200    CC lib/nvme/nvme_transport.o
00:04:55.200    SO libspdk_thread.so.11.0
00:04:55.200    CC lib/nvme/nvme_discovery.o
00:04:55.200    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:55.200    SYMLINK libspdk_thread.so
00:04:55.200    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:55.200    CC lib/nvme/nvme_tcp.o
00:04:55.200    CC lib/nvme/nvme_opal.o
00:04:55.200    CC lib/nvme/nvme_io_msg.o
00:04:55.459    CC lib/nvme/nvme_poll_group.o
00:04:55.459    CC lib/nvme/nvme_zns.o
00:04:55.718    CC lib/nvme/nvme_stubs.o
00:04:55.718    CC lib/nvme/nvme_auth.o
00:04:55.718    CC lib/accel/accel.o
00:04:55.718    CC lib/blob/blobstore.o
00:04:55.718    CC lib/nvme/nvme_cuse.o
00:04:55.977    CC lib/init/json_config.o
00:04:56.236    CC lib/virtio/virtio.o
00:04:56.236    CC lib/blob/request.o
00:04:56.236    CC lib/fsdev/fsdev.o
00:04:56.236    CC lib/init/subsystem.o
00:04:56.236    CC lib/init/subsystem_rpc.o
00:04:56.495    CC lib/virtio/virtio_vhost_user.o
00:04:56.495    CC lib/fsdev/fsdev_io.o
00:04:56.495    CC lib/init/rpc.o
00:04:56.495    CC lib/fsdev/fsdev_rpc.o
00:04:56.495    CC lib/nvme/nvme_rdma.o
00:04:56.754    LIB libspdk_init.a
00:04:56.754    CC lib/blob/zeroes.o
00:04:56.754    CC lib/blob/blob_bs_dev.o
00:04:56.754    CC lib/virtio/virtio_vfio_user.o
00:04:56.754    SO libspdk_init.so.6.0
00:04:56.754    CC lib/accel/accel_rpc.o
00:04:56.754    SYMLINK libspdk_init.so
00:04:56.754    CC lib/accel/accel_sw.o
00:04:56.754    CC lib/virtio/virtio_pci.o
00:04:56.754    LIB libspdk_fsdev.a
00:04:56.754    SO libspdk_fsdev.so.2.0
00:04:57.013    SYMLINK libspdk_fsdev.so
00:04:57.013    CC lib/event/app.o
00:04:57.013    CC lib/event/reactor.o
00:04:57.013    CC lib/event/app_rpc.o
00:04:57.013    CC lib/event/log_rpc.o
00:04:57.013    CC lib/event/scheduler_static.o
00:04:57.013    LIB libspdk_virtio.a
00:04:57.013    LIB libspdk_accel.a
00:04:57.276    SO libspdk_virtio.so.7.0
00:04:57.276    SO libspdk_accel.so.16.0
00:04:57.276    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:57.276    SYMLINK libspdk_virtio.so
00:04:57.276    SYMLINK libspdk_accel.so
00:04:57.544    LIB libspdk_event.a
00:04:57.544    SO libspdk_event.so.14.0
00:04:57.544    CC lib/bdev/bdev_rpc.o
00:04:57.544    CC lib/bdev/bdev_zone.o
00:04:57.544    CC lib/bdev/bdev.o
00:04:57.544    CC lib/bdev/part.o
00:04:57.544    CC lib/bdev/scsi_nvme.o
00:04:57.803    SYMLINK libspdk_event.so
00:04:57.803    LIB libspdk_fuse_dispatcher.a
00:04:58.062    SO libspdk_fuse_dispatcher.so.1.0
00:04:58.062    LIB libspdk_nvme.a
00:04:58.062    SYMLINK libspdk_fuse_dispatcher.so
00:04:58.321    SO libspdk_nvme.so.15.0
00:04:58.581    SYMLINK libspdk_nvme.so
00:04:59.149    LIB libspdk_blob.a
00:04:59.408    SO libspdk_blob.so.12.0
00:04:59.408    SYMLINK libspdk_blob.so
00:04:59.974    CC lib/blobfs/blobfs.o
00:04:59.974    CC lib/blobfs/tree.o
00:04:59.974    CC lib/lvol/lvol.o
00:05:00.910    LIB libspdk_blobfs.a
00:05:00.910    LIB libspdk_bdev.a
00:05:00.910    SO libspdk_blobfs.so.11.0
00:05:00.910    SYMLINK libspdk_blobfs.so
00:05:00.910    LIB libspdk_lvol.a
00:05:00.910    SO libspdk_bdev.so.17.0
00:05:00.910    SO libspdk_lvol.so.11.0
00:05:00.910    SYMLINK libspdk_bdev.so
00:05:00.910    SYMLINK libspdk_lvol.so
00:05:01.169    CC lib/ftl/ftl_io.o
00:05:01.169    CC lib/ftl/ftl_init.o
00:05:01.169    CC lib/ftl/ftl_sb.o
00:05:01.169    CC lib/ftl/ftl_core.o
00:05:01.169    CC lib/ftl/ftl_layout.o
00:05:01.169    CC lib/ftl/ftl_debug.o
00:05:01.428    CC lib/nvmf/ctrlr.o
00:05:01.428    CC lib/ublk/ublk.o
00:05:01.428    CC lib/scsi/dev.o
00:05:01.428    CC lib/nbd/nbd.o
00:05:01.428    CC lib/ftl/ftl_l2p.o
00:05:01.428    CC lib/ftl/ftl_l2p_flat.o
00:05:01.428    CC lib/scsi/lun.o
00:05:01.428    CC lib/nbd/nbd_rpc.o
00:05:01.428    CC lib/scsi/port.o
00:05:01.687    CC lib/ftl/ftl_nv_cache.o
00:05:01.687    CC lib/nvmf/ctrlr_discovery.o
00:05:01.687    CC lib/scsi/scsi.o
00:05:01.687    CC lib/ftl/ftl_band.o
00:05:01.687    CC lib/ftl/ftl_band_ops.o
00:05:01.687    CC lib/nvmf/ctrlr_bdev.o
00:05:01.687    LIB libspdk_nbd.a
00:05:01.687    SO libspdk_nbd.so.7.0
00:05:01.946    CC lib/scsi/scsi_bdev.o
00:05:01.946    SYMLINK libspdk_nbd.so
00:05:01.946    CC lib/nvmf/subsystem.o
00:05:01.946    CC lib/ftl/ftl_writer.o
00:05:01.946    CC lib/ublk/ublk_rpc.o
00:05:01.946    CC lib/ftl/ftl_rq.o
00:05:02.204    CC lib/scsi/scsi_pr.o
00:05:02.204    CC lib/nvmf/nvmf.o
00:05:02.204    CC lib/nvmf/nvmf_rpc.o
00:05:02.204    LIB libspdk_ublk.a
00:05:02.204    CC lib/scsi/scsi_rpc.o
00:05:02.204    SO libspdk_ublk.so.3.0
00:05:02.204    SYMLINK libspdk_ublk.so
00:05:02.204    CC lib/nvmf/transport.o
00:05:02.463    CC lib/nvmf/tcp.o
00:05:02.463    CC lib/nvmf/stubs.o
00:05:02.463    CC lib/nvmf/mdns_server.o
00:05:02.463    CC lib/scsi/task.o
00:05:02.722    CC lib/ftl/ftl_reloc.o
00:05:02.722    LIB libspdk_scsi.a
00:05:02.722    SO libspdk_scsi.so.9.0
00:05:02.722    CC lib/nvmf/rdma.o
00:05:02.980    SYMLINK libspdk_scsi.so
00:05:02.980    CC lib/nvmf/auth.o
00:05:02.980    CC lib/ftl/ftl_l2p_cache.o
00:05:03.238    CC lib/ftl/ftl_p2l.o
00:05:03.238    CC lib/ftl/ftl_p2l_log.o
00:05:03.238    CC lib/iscsi/conn.o
00:05:03.238    CC lib/vhost/vhost.o
00:05:03.238    CC lib/ftl/mngt/ftl_mngt.o
00:05:03.238    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:05:03.497    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:05:03.497    CC lib/vhost/vhost_rpc.o
00:05:03.497    CC lib/vhost/vhost_scsi.o
00:05:03.497    CC lib/vhost/vhost_blk.o
00:05:03.497    CC lib/iscsi/init_grp.o
00:05:03.755    CC lib/ftl/mngt/ftl_mngt_startup.o
00:05:03.755    CC lib/ftl/mngt/ftl_mngt_md.o
00:05:03.755    CC lib/iscsi/iscsi.o
00:05:03.755    CC lib/iscsi/param.o
00:05:03.755    CC lib/vhost/rte_vhost_user.o
00:05:04.013    CC lib/iscsi/portal_grp.o
00:05:04.014    CC lib/ftl/mngt/ftl_mngt_misc.o
00:05:04.014    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:05:04.014    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:05:04.273    CC lib/iscsi/tgt_node.o
00:05:04.273    CC lib/ftl/mngt/ftl_mngt_band.o
00:05:04.273    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:05:04.273    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:05:04.273    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:05:04.531    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:05:04.531    CC lib/ftl/utils/ftl_conf.o
00:05:04.531    CC lib/ftl/utils/ftl_md.o
00:05:04.531    CC lib/ftl/utils/ftl_mempool.o
00:05:04.531    CC lib/iscsi/iscsi_subsystem.o
00:05:04.789    CC lib/iscsi/iscsi_rpc.o
00:05:04.789    CC lib/ftl/utils/ftl_bitmap.o
00:05:04.789    CC lib/ftl/utils/ftl_property.o
00:05:04.789    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:05:04.789    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:05:05.048    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:05:05.048    LIB libspdk_vhost.a
00:05:05.048    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:05:05.048    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:05:05.048    SO libspdk_vhost.so.8.0
00:05:05.048    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:05:05.048    CC lib/iscsi/task.o
00:05:05.048    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:05:05.048    CC lib/ftl/upgrade/ftl_sb_v3.o
00:05:05.048    CC lib/ftl/upgrade/ftl_sb_v5.o
00:05:05.309    SYMLINK libspdk_vhost.so
00:05:05.309    CC lib/ftl/nvc/ftl_nvc_dev.o
00:05:05.309    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:05:05.309    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:05:05.309    LIB libspdk_nvmf.a
00:05:05.309    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:05:05.309    CC lib/ftl/base/ftl_base_dev.o
00:05:05.309    CC lib/ftl/base/ftl_base_bdev.o
00:05:05.309    LIB libspdk_iscsi.a
00:05:05.309    CC lib/ftl/ftl_trace.o
00:05:05.309    SO libspdk_nvmf.so.20.0
00:05:05.309    SO libspdk_iscsi.so.8.0
00:05:05.571    LIB libspdk_ftl.a
00:05:05.571    SYMLINK libspdk_iscsi.so
00:05:05.571    SYMLINK libspdk_nvmf.so
00:05:05.829    SO libspdk_ftl.so.9.0
00:05:06.087    SYMLINK libspdk_ftl.so
00:05:06.653    CC module/env_dpdk/env_dpdk_rpc.o
00:05:06.653    CC module/fsdev/aio/fsdev_aio.o
00:05:06.653    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:05:06.653    CC module/keyring/file/keyring.o
00:05:06.653    CC module/keyring/linux/keyring.o
00:05:06.653    CC module/scheduler/gscheduler/gscheduler.o
00:05:06.653    CC module/scheduler/dynamic/scheduler_dynamic.o
00:05:06.653    CC module/blob/bdev/blob_bdev.o
00:05:06.653    CC module/sock/posix/posix.o
00:05:06.653    CC module/accel/error/accel_error.o
00:05:06.653    LIB libspdk_env_dpdk_rpc.a
00:05:06.912    SO libspdk_env_dpdk_rpc.so.6.0
00:05:06.912    SYMLINK libspdk_env_dpdk_rpc.so
00:05:06.912    CC module/keyring/linux/keyring_rpc.o
00:05:06.912    CC module/keyring/file/keyring_rpc.o
00:05:06.912    LIB libspdk_scheduler_gscheduler.a
00:05:06.912    CC module/accel/error/accel_error_rpc.o
00:05:06.912    LIB libspdk_scheduler_dpdk_governor.a
00:05:06.912    SO libspdk_scheduler_gscheduler.so.4.0
00:05:06.912    SO libspdk_scheduler_dpdk_governor.so.4.0
00:05:06.912    LIB libspdk_scheduler_dynamic.a
00:05:06.912    SYMLINK libspdk_scheduler_gscheduler.so
00:05:06.912    SO libspdk_scheduler_dynamic.so.4.0
00:05:06.912    LIB libspdk_keyring_linux.a
00:05:06.912    CC module/fsdev/aio/fsdev_aio_rpc.o
00:05:06.912    SYMLINK libspdk_scheduler_dpdk_governor.so
00:05:06.912    CC module/fsdev/aio/linux_aio_mgr.o
00:05:06.912    LIB libspdk_blob_bdev.a
00:05:06.912    SO libspdk_keyring_linux.so.1.0
00:05:06.912    SYMLINK libspdk_scheduler_dynamic.so
00:05:06.912    LIB libspdk_keyring_file.a
00:05:06.912    LIB libspdk_accel_error.a
00:05:06.912    SO libspdk_blob_bdev.so.12.0
00:05:07.171    SYMLINK libspdk_keyring_linux.so
00:05:07.171    SO libspdk_accel_error.so.2.0
00:05:07.171    SO libspdk_keyring_file.so.2.0
00:05:07.171    SYMLINK libspdk_blob_bdev.so
00:05:07.171    SYMLINK libspdk_accel_error.so
00:05:07.171    SYMLINK libspdk_keyring_file.so
00:05:07.171    CC module/accel/ioat/accel_ioat.o
00:05:07.171    CC module/accel/ioat/accel_ioat_rpc.o
00:05:07.171    CC module/accel/dsa/accel_dsa.o
00:05:07.430    CC module/accel/iaa/accel_iaa.o
00:05:07.430    CC module/accel/dsa/accel_dsa_rpc.o
00:05:07.430    LIB libspdk_accel_ioat.a
00:05:07.430    CC module/bdev/error/vbdev_error.o
00:05:07.430    CC module/bdev/delay/vbdev_delay.o
00:05:07.430    CC module/bdev/gpt/gpt.o
00:05:07.430    CC module/blobfs/bdev/blobfs_bdev.o
00:05:07.430    SO libspdk_accel_ioat.so.6.0
00:05:07.430    LIB libspdk_fsdev_aio.a
00:05:07.430    SO libspdk_fsdev_aio.so.1.0
00:05:07.430    SYMLINK libspdk_accel_ioat.so
00:05:07.430    CC module/bdev/delay/vbdev_delay_rpc.o
00:05:07.430    CC module/accel/iaa/accel_iaa_rpc.o
00:05:07.430    LIB libspdk_accel_dsa.a
00:05:08.056    SO libspdk_accel_dsa.so.5.0
00:05:08.056    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:05:08.056    SYMLINK libspdk_fsdev_aio.so
00:05:08.056    LIB libspdk_sock_posix.a
00:05:08.056    CC module/bdev/gpt/vbdev_gpt.o
00:05:08.056    CC module/bdev/error/vbdev_error_rpc.o
00:05:08.056    SYMLINK libspdk_accel_dsa.so
00:05:08.056    SO libspdk_sock_posix.so.6.0
00:05:08.056    LIB libspdk_accel_iaa.a
00:05:08.056    SO libspdk_accel_iaa.so.3.0
00:05:08.056    SYMLINK libspdk_sock_posix.so
00:05:08.056    CC module/bdev/lvol/vbdev_lvol.o
00:05:08.056    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:05:08.056    SYMLINK libspdk_accel_iaa.so
00:05:08.056    LIB libspdk_bdev_delay.a
00:05:08.056    LIB libspdk_bdev_error.a
00:05:08.056    LIB libspdk_blobfs_bdev.a
00:05:08.056    SO libspdk_bdev_error.so.6.0
00:05:08.056    SO libspdk_bdev_delay.so.6.0
00:05:08.056    CC module/bdev/malloc/bdev_malloc.o
00:05:08.056    SO libspdk_blobfs_bdev.so.6.0
00:05:08.056    CC module/bdev/nvme/bdev_nvme.o
00:05:08.056    CC module/bdev/null/bdev_null.o
00:05:08.056    LIB libspdk_bdev_gpt.a
00:05:08.056    SYMLINK libspdk_bdev_error.so
00:05:08.056    SYMLINK libspdk_bdev_delay.so
00:05:08.056    CC module/bdev/nvme/bdev_nvme_rpc.o
00:05:08.056    SYMLINK libspdk_blobfs_bdev.so
00:05:08.056    CC module/bdev/malloc/bdev_malloc_rpc.o
00:05:08.056    SO libspdk_bdev_gpt.so.6.0
00:05:08.056    CC module/bdev/passthru/vbdev_passthru.o
00:05:08.056    SYMLINK libspdk_bdev_gpt.so
00:05:08.056    CC module/bdev/nvme/nvme_rpc.o
00:05:08.056    CC module/bdev/raid/bdev_raid.o
00:05:08.056    CC module/bdev/raid/bdev_raid_rpc.o
00:05:08.056    CC module/bdev/raid/bdev_raid_sb.o
00:05:08.056    CC module/bdev/null/bdev_null_rpc.o
00:05:08.316    LIB libspdk_bdev_malloc.a
00:05:08.316    CC module/bdev/raid/raid0.o
00:05:08.316    LIB libspdk_bdev_lvol.a
00:05:08.316    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:05:08.316    SO libspdk_bdev_malloc.so.6.0
00:05:08.316    SO libspdk_bdev_lvol.so.6.0
00:05:08.316    CC module/bdev/raid/raid1.o
00:05:08.316    SYMLINK libspdk_bdev_malloc.so
00:05:08.316    CC module/bdev/raid/concat.o
00:05:08.316    LIB libspdk_bdev_null.a
00:05:08.316    SYMLINK libspdk_bdev_lvol.so
00:05:08.316    CC module/bdev/nvme/bdev_mdns_client.o
00:05:08.316    SO libspdk_bdev_null.so.6.0
00:05:08.316    LIB libspdk_bdev_passthru.a
00:05:08.575    SO libspdk_bdev_passthru.so.6.0
00:05:08.575    SYMLINK libspdk_bdev_null.so
00:05:08.575    CC module/bdev/split/vbdev_split.o
00:05:08.575    SYMLINK libspdk_bdev_passthru.so
00:05:08.575    CC module/bdev/split/vbdev_split_rpc.o
00:05:08.575    CC module/bdev/zone_block/vbdev_zone_block.o
00:05:08.835    CC module/bdev/nvme/vbdev_opal.o
00:05:08.835    CC module/bdev/xnvme/bdev_xnvme.o
00:05:08.835    CC module/bdev/aio/bdev_aio.o
00:05:08.835    LIB libspdk_bdev_split.a
00:05:08.835    CC module/bdev/virtio/bdev_virtio_scsi.o
00:05:08.835    SO libspdk_bdev_split.so.6.0
00:05:08.835    CC module/bdev/ftl/bdev_ftl.o
00:05:08.835    CC module/bdev/iscsi/bdev_iscsi.o
00:05:08.835    SYMLINK libspdk_bdev_split.so
00:05:08.835    CC module/bdev/ftl/bdev_ftl_rpc.o
00:05:09.094    CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:05:09.094    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:05:09.094    CC module/bdev/nvme/vbdev_opal_rpc.o
00:05:09.094    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:05:09.094    LIB libspdk_bdev_ftl.a
00:05:09.094    SO libspdk_bdev_ftl.so.6.0
00:05:09.094    CC module/bdev/aio/bdev_aio_rpc.o
00:05:09.094    LIB libspdk_bdev_xnvme.a
00:05:09.094    LIB libspdk_bdev_zone_block.a
00:05:09.094    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:05:09.094    SO libspdk_bdev_xnvme.so.3.0
00:05:09.094    SYMLINK libspdk_bdev_ftl.so
00:05:09.094    LIB libspdk_bdev_raid.a
00:05:09.094    CC module/bdev/virtio/bdev_virtio_blk.o
00:05:09.094    SO libspdk_bdev_zone_block.so.6.0
00:05:09.354    CC module/bdev/virtio/bdev_virtio_rpc.o
00:05:09.354    SYMLINK libspdk_bdev_xnvme.so
00:05:09.354    SO libspdk_bdev_raid.so.6.0
00:05:09.354    SYMLINK libspdk_bdev_zone_block.so
00:05:09.354    LIB libspdk_bdev_aio.a
00:05:09.354    SYMLINK libspdk_bdev_raid.so
00:05:09.354    SO libspdk_bdev_aio.so.6.0
00:05:09.354    LIB libspdk_bdev_iscsi.a
00:05:09.354    SO libspdk_bdev_iscsi.so.6.0
00:05:09.354    SYMLINK libspdk_bdev_aio.so
00:05:09.612    SYMLINK libspdk_bdev_iscsi.so
00:05:09.612    LIB libspdk_bdev_virtio.a
00:05:09.612    SO libspdk_bdev_virtio.so.6.0
00:05:09.612    SYMLINK libspdk_bdev_virtio.so
00:05:10.991    LIB libspdk_bdev_nvme.a
00:05:10.991    SO libspdk_bdev_nvme.so.7.1
00:05:10.991    SYMLINK libspdk_bdev_nvme.so
00:05:11.929    CC module/event/subsystems/scheduler/scheduler.o
00:05:11.929    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:05:11.929    CC module/event/subsystems/iobuf/iobuf.o
00:05:11.929    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:05:11.929    CC module/event/subsystems/sock/sock.o
00:05:11.929    CC module/event/subsystems/fsdev/fsdev.o
00:05:11.929    CC module/event/subsystems/vmd/vmd.o
00:05:11.929    CC module/event/subsystems/vmd/vmd_rpc.o
00:05:11.929    CC module/event/subsystems/keyring/keyring.o
00:05:11.929    LIB libspdk_event_vhost_blk.a
00:05:11.929    LIB libspdk_event_sock.a
00:05:11.929    LIB libspdk_event_keyring.a
00:05:11.929    LIB libspdk_event_scheduler.a
00:05:11.929    LIB libspdk_event_iobuf.a
00:05:11.929    LIB libspdk_event_vmd.a
00:05:11.929    SO libspdk_event_vhost_blk.so.3.0
00:05:11.929    LIB libspdk_event_fsdev.a
00:05:11.929    SO libspdk_event_keyring.so.1.0
00:05:11.929    SO libspdk_event_sock.so.5.0
00:05:11.929    SO libspdk_event_scheduler.so.4.0
00:05:11.929    SO libspdk_event_iobuf.so.3.0
00:05:11.929    SO libspdk_event_fsdev.so.1.0
00:05:11.929    SO libspdk_event_vmd.so.6.0
00:05:11.929    SYMLINK libspdk_event_keyring.so
00:05:11.929    SYMLINK libspdk_event_vhost_blk.so
00:05:11.929    SYMLINK libspdk_event_scheduler.so
00:05:11.929    SYMLINK libspdk_event_sock.so
00:05:11.929    SYMLINK libspdk_event_fsdev.so
00:05:11.929    SYMLINK libspdk_event_iobuf.so
00:05:11.929    SYMLINK libspdk_event_vmd.so
00:05:12.497    CC module/event/subsystems/accel/accel.o
00:05:12.497    LIB libspdk_event_accel.a
00:05:12.497    SO libspdk_event_accel.so.6.0
00:05:12.756    SYMLINK libspdk_event_accel.so
00:05:13.014    CC module/event/subsystems/bdev/bdev.o
00:05:13.272    LIB libspdk_event_bdev.a
00:05:13.272    SO libspdk_event_bdev.so.6.0
00:05:13.272    SYMLINK libspdk_event_bdev.so
00:05:13.841    CC module/event/subsystems/ublk/ublk.o
00:05:13.841    CC module/event/subsystems/scsi/scsi.o
00:05:13.841    CC module/event/subsystems/nbd/nbd.o
00:05:13.841    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:05:13.841    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:05:13.841    LIB libspdk_event_ublk.a
00:05:13.841    LIB libspdk_event_scsi.a
00:05:13.841    LIB libspdk_event_nbd.a
00:05:13.841    SO libspdk_event_ublk.so.3.0
00:05:13.841    SO libspdk_event_scsi.so.6.0
00:05:13.841    SO libspdk_event_nbd.so.6.0
00:05:14.100    SYMLINK libspdk_event_ublk.so
00:05:14.100    SYMLINK libspdk_event_scsi.so
00:05:14.100    SYMLINK libspdk_event_nbd.so
00:05:14.100    LIB libspdk_event_nvmf.a
00:05:14.100    SO libspdk_event_nvmf.so.6.0
00:05:14.100    SYMLINK libspdk_event_nvmf.so
00:05:14.358    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:05:14.358    CC module/event/subsystems/iscsi/iscsi.o
00:05:14.641    LIB libspdk_event_vhost_scsi.a
00:05:14.641    LIB libspdk_event_iscsi.a
00:05:14.641    SO libspdk_event_vhost_scsi.so.3.0
00:05:14.641    SO libspdk_event_iscsi.so.6.0
00:05:14.641    SYMLINK libspdk_event_vhost_scsi.so
00:05:14.641    SYMLINK libspdk_event_iscsi.so
00:05:14.900    SO libspdk.so.6.0
00:05:14.900    SYMLINK libspdk.so
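
[annotation] The LIB/SO/SYMLINK triples above are the library stages of the SPDK build: static archive, versioned shared object, and an unversioned symlink pointing at it. A minimal sketch of the last two steps, with the build directory and the exact commands assumed rather than taken from this log:

    # Hypothetical reconstruction of the SO/SYMLINK steps (paths assumed).
    lib=libspdk_env_dpdk_rpc   # one of the modules built above
    ver=6.0                    # matches the "SO ... .so.6.0" lines
    cd "$BUILD_DIR/lib" || exit 1
    [[ -e $lib.so.$ver ]] || exit 1    # "SO": the versioned object exists
    ln -sf "$lib.so.$ver" "$lib.so"    # "SYMLINK": unversioned name -> versioned object
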
00:05:15.468    CXX app/trace/trace.o
00:05:15.468    CC app/spdk_nvme_identify/identify.o
00:05:15.468    CC app/trace_record/trace_record.o
00:05:15.468    CC app/spdk_nvme_perf/perf.o
00:05:15.468    CC app/spdk_lspci/spdk_lspci.o
00:05:15.468    CC app/nvmf_tgt/nvmf_main.o
00:05:15.468    CC app/iscsi_tgt/iscsi_tgt.o
00:05:15.468    CC app/spdk_tgt/spdk_tgt.o
00:05:15.468    CC test/thread/poller_perf/poller_perf.o
00:05:15.468    CC examples/util/zipf/zipf.o
00:05:15.468    LINK spdk_lspci
00:05:15.468    LINK nvmf_tgt
00:05:15.468    LINK iscsi_tgt
00:05:15.468    LINK spdk_tgt
00:05:15.468    LINK spdk_trace_record
00:05:15.468    LINK poller_perf
00:05:15.468    LINK zipf
00:05:15.727    CC app/spdk_nvme_discover/discovery_aer.o
00:05:15.727    LINK spdk_trace
00:05:15.727    CC app/spdk_top/spdk_top.o
00:05:15.727    CC app/spdk_dd/spdk_dd.o
00:05:15.985    LINK spdk_nvme_discover
00:05:15.985    CC examples/ioat/perf/perf.o
00:05:15.985    CC test/dma/test_dma/test_dma.o
00:05:15.985    CC examples/vmd/lsvmd/lsvmd.o
00:05:15.985    CC examples/idxd/perf/perf.o
00:05:15.985    CC examples/interrupt_tgt/interrupt_tgt.o
00:05:15.985    LINK lsvmd
00:05:16.242    LINK ioat_perf
00:05:16.242    LINK interrupt_tgt
00:05:16.242    LINK spdk_dd
00:05:16.242    LINK spdk_nvme_perf
00:05:16.242    LINK spdk_nvme_identify
00:05:16.242    CC examples/thread/thread/thread_ex.o
00:05:16.242    LINK idxd_perf
00:05:16.242    CC examples/vmd/led/led.o
00:05:16.501    CC examples/ioat/verify/verify.o
00:05:16.501    LINK test_dma
00:05:16.501    LINK thread
00:05:16.501    LINK led
00:05:16.501    CC examples/sock/hello_world/hello_sock.o
00:05:16.501    TEST_HEADER include/spdk/accel.h
00:05:16.501    TEST_HEADER include/spdk/accel_module.h
00:05:16.501    TEST_HEADER include/spdk/assert.h
00:05:16.501    TEST_HEADER include/spdk/barrier.h
00:05:16.501    TEST_HEADER include/spdk/base64.h
00:05:16.501    TEST_HEADER include/spdk/bdev.h
00:05:16.501    TEST_HEADER include/spdk/bdev_module.h
00:05:16.501    CC app/vhost/vhost.o
00:05:16.501    TEST_HEADER include/spdk/bdev_zone.h
00:05:16.501    TEST_HEADER include/spdk/bit_array.h
00:05:16.501    TEST_HEADER include/spdk/bit_pool.h
00:05:16.501    TEST_HEADER include/spdk/blob_bdev.h
00:05:16.501    TEST_HEADER include/spdk/blobfs_bdev.h
00:05:16.501    TEST_HEADER include/spdk/blobfs.h
00:05:16.501    TEST_HEADER include/spdk/blob.h
00:05:16.501    TEST_HEADER include/spdk/conf.h
00:05:16.501    TEST_HEADER include/spdk/config.h
00:05:16.501    TEST_HEADER include/spdk/cpuset.h
00:05:16.501    TEST_HEADER include/spdk/crc16.h
00:05:16.501    TEST_HEADER include/spdk/crc32.h
00:05:16.501    TEST_HEADER include/spdk/crc64.h
00:05:16.501    TEST_HEADER include/spdk/dif.h
00:05:16.501    LINK verify
00:05:16.501    TEST_HEADER include/spdk/dma.h
00:05:16.501    TEST_HEADER include/spdk/endian.h
00:05:16.501    TEST_HEADER include/spdk/env_dpdk.h
00:05:16.501    TEST_HEADER include/spdk/env.h
00:05:16.501    TEST_HEADER include/spdk/event.h
00:05:16.501    TEST_HEADER include/spdk/fd_group.h
00:05:16.501    CC app/fio/nvme/fio_plugin.o
00:05:16.501    TEST_HEADER include/spdk/fd.h
00:05:16.501    TEST_HEADER include/spdk/file.h
00:05:16.760    TEST_HEADER include/spdk/fsdev.h
00:05:16.760    TEST_HEADER include/spdk/fsdev_module.h
00:05:16.760    TEST_HEADER include/spdk/ftl.h
00:05:16.760    TEST_HEADER include/spdk/fuse_dispatcher.h
00:05:16.760    TEST_HEADER include/spdk/gpt_spec.h
00:05:16.760    TEST_HEADER include/spdk/hexlify.h
00:05:16.760    TEST_HEADER include/spdk/histogram_data.h
00:05:16.760    TEST_HEADER include/spdk/idxd.h
00:05:16.760    TEST_HEADER include/spdk/idxd_spec.h
00:05:16.760    TEST_HEADER include/spdk/init.h
00:05:16.760    TEST_HEADER include/spdk/ioat.h
00:05:16.760    TEST_HEADER include/spdk/ioat_spec.h
00:05:16.760    TEST_HEADER include/spdk/iscsi_spec.h
00:05:16.760    TEST_HEADER include/spdk/json.h
00:05:16.760    TEST_HEADER include/spdk/jsonrpc.h
00:05:16.760    TEST_HEADER include/spdk/keyring.h
00:05:16.760    TEST_HEADER include/spdk/keyring_module.h
00:05:16.760    TEST_HEADER include/spdk/likely.h
00:05:16.760    TEST_HEADER include/spdk/log.h
00:05:16.760    TEST_HEADER include/spdk/lvol.h
00:05:16.760    TEST_HEADER include/spdk/md5.h
00:05:16.760    TEST_HEADER include/spdk/memory.h
00:05:16.760    TEST_HEADER include/spdk/mmio.h
00:05:16.760    CC test/app/bdev_svc/bdev_svc.o
00:05:16.760    TEST_HEADER include/spdk/nbd.h
00:05:16.760    TEST_HEADER include/spdk/net.h
00:05:16.760    TEST_HEADER include/spdk/notify.h
00:05:16.760    TEST_HEADER include/spdk/nvme.h
00:05:16.760    TEST_HEADER include/spdk/nvme_intel.h
00:05:16.760    TEST_HEADER include/spdk/nvme_ocssd.h
00:05:16.760    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:16.760    TEST_HEADER include/spdk/nvme_spec.h
00:05:16.760    TEST_HEADER include/spdk/nvme_zns.h
00:05:16.760    TEST_HEADER include/spdk/nvmf_cmd.h
00:05:16.760    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:16.760    TEST_HEADER include/spdk/nvmf.h
00:05:16.760    TEST_HEADER include/spdk/nvmf_spec.h
00:05:16.760    TEST_HEADER include/spdk/nvmf_transport.h
00:05:16.760    TEST_HEADER include/spdk/opal.h
00:05:16.760    TEST_HEADER include/spdk/opal_spec.h
00:05:16.760    TEST_HEADER include/spdk/pci_ids.h
00:05:16.760    TEST_HEADER include/spdk/pipe.h
00:05:16.760    TEST_HEADER include/spdk/queue.h
00:05:16.760    TEST_HEADER include/spdk/reduce.h
00:05:16.760    TEST_HEADER include/spdk/rpc.h
00:05:16.760    TEST_HEADER include/spdk/scheduler.h
00:05:16.760    TEST_HEADER include/spdk/scsi.h
00:05:16.760    TEST_HEADER include/spdk/scsi_spec.h
00:05:16.760    TEST_HEADER include/spdk/sock.h
00:05:16.760    TEST_HEADER include/spdk/stdinc.h
00:05:16.760    TEST_HEADER include/spdk/string.h
00:05:16.760    TEST_HEADER include/spdk/thread.h
00:05:16.760    TEST_HEADER include/spdk/trace.h
00:05:16.760    TEST_HEADER include/spdk/trace_parser.h
00:05:16.760    TEST_HEADER include/spdk/tree.h
00:05:16.760    TEST_HEADER include/spdk/ublk.h
00:05:16.760    TEST_HEADER include/spdk/util.h
00:05:16.760    TEST_HEADER include/spdk/uuid.h
00:05:16.760    TEST_HEADER include/spdk/version.h
00:05:16.760    TEST_HEADER include/spdk/vfio_user_pci.h
00:05:16.760    TEST_HEADER include/spdk/vfio_user_spec.h
00:05:16.760    LINK spdk_top
00:05:16.760    TEST_HEADER include/spdk/vhost.h
00:05:16.760    TEST_HEADER include/spdk/vmd.h
00:05:16.760    TEST_HEADER include/spdk/xor.h
00:05:16.760    TEST_HEADER include/spdk/zipf.h
00:05:16.760    CXX test/cpp_headers/accel.o
00:05:16.760    LINK vhost
00:05:16.760    LINK hello_sock
00:05:16.760    CC app/fio/bdev/fio_plugin.o
00:05:16.760    LINK bdev_svc
00:05:17.020    CC test/event/event_perf/event_perf.o
00:05:17.020    CXX test/cpp_headers/accel_module.o
00:05:17.020    CC test/env/mem_callbacks/mem_callbacks.o
00:05:17.020    CC examples/accel/perf/accel_perf.o
00:05:17.020    LINK event_perf
00:05:17.020    CXX test/cpp_headers/assert.o
00:05:17.020    CC examples/blob/hello_world/hello_blob.o
00:05:17.279    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:17.279    CC examples/fsdev/hello_world/hello_fsdev.o
00:05:17.279    LINK spdk_nvme
00:05:17.279    CC examples/nvme/hello_world/hello_world.o
00:05:17.279    CXX test/cpp_headers/barrier.o
00:05:17.279    CC test/event/reactor/reactor.o
00:05:17.279    LINK spdk_bdev
00:05:17.279    LINK hello_blob
00:05:17.279    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:17.538    LINK accel_perf
00:05:17.538    LINK mem_callbacks
00:05:17.538    CXX test/cpp_headers/base64.o
00:05:17.538    CXX test/cpp_headers/bdev.o
00:05:17.538    LINK hello_world
00:05:17.538    LINK reactor
00:05:17.538    LINK hello_fsdev
00:05:17.538    LINK nvme_fuzz
00:05:17.797    CXX test/cpp_headers/bdev_module.o
00:05:17.797    CC test/env/vtophys/vtophys.o
00:05:17.797    CC examples/blob/cli/blobcli.o
00:05:17.797    CC test/event/reactor_perf/reactor_perf.o
00:05:17.797    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:17.797    CC examples/nvme/reconnect/reconnect.o
00:05:17.797    CC examples/nvme/nvme_manage/nvme_manage.o
00:05:17.797    LINK vtophys
00:05:17.797    CC examples/nvme/arbitration/arbitration.o
00:05:17.797    LINK reactor_perf
00:05:17.797    LINK env_dpdk_post_init
00:05:17.797    CXX test/cpp_headers/bdev_zone.o
00:05:18.055    CC examples/bdev/hello_world/hello_bdev.o
00:05:18.055    CXX test/cpp_headers/bit_array.o
00:05:18.055    LINK reconnect
00:05:18.055    CC examples/bdev/bdevperf/bdevperf.o
00:05:18.055    CC test/event/app_repeat/app_repeat.o
00:05:18.055    CC test/env/memory/memory_ut.o
00:05:18.314    LINK arbitration
00:05:18.314    LINK blobcli
00:05:18.314    CXX test/cpp_headers/bit_pool.o
00:05:18.314    LINK hello_bdev
00:05:18.314    LINK app_repeat
00:05:18.314    LINK nvme_manage
00:05:18.314    CXX test/cpp_headers/blob_bdev.o
00:05:18.573    CC test/env/pci/pci_ut.o
00:05:18.573    CXX test/cpp_headers/blobfs_bdev.o
00:05:18.573    CC test/app/histogram_perf/histogram_perf.o
00:05:18.573    CXX test/cpp_headers/blobfs.o
00:05:18.573    CC examples/nvme/hotplug/hotplug.o
00:05:18.573    CC test/event/scheduler/scheduler.o
00:05:18.573    LINK histogram_perf
00:05:18.573    CXX test/cpp_headers/blob.o
00:05:18.832    CC test/app/jsoncat/jsoncat.o
00:05:18.832    CC test/app/stub/stub.o
00:05:18.832    LINK scheduler
00:05:18.832    CXX test/cpp_headers/conf.o
00:05:18.832    LINK hotplug
00:05:18.832    LINK jsoncat
00:05:18.832    CC examples/nvme/cmb_copy/cmb_copy.o
00:05:18.832    LINK pci_ut
00:05:18.832    LINK stub
00:05:19.090    CXX test/cpp_headers/config.o
00:05:19.090    CXX test/cpp_headers/cpuset.o
00:05:19.090    LINK bdevperf
00:05:19.090    CXX test/cpp_headers/crc16.o
00:05:19.090    CXX test/cpp_headers/crc32.o
00:05:19.090    LINK cmb_copy
00:05:19.090    CXX test/cpp_headers/crc64.o
00:05:19.090    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:19.090    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:19.090    CXX test/cpp_headers/dif.o
00:05:19.349    CXX test/cpp_headers/dma.o
00:05:19.349    CC test/rpc_client/rpc_client_test.o
00:05:19.349    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:19.349    CC examples/nvme/abort/abort.o
00:05:19.349    LINK memory_ut
00:05:19.349    LINK iscsi_fuzz
00:05:19.349    CC test/nvme/aer/aer.o
00:05:19.349    CC test/accel/dif/dif.o
00:05:19.349    CXX test/cpp_headers/endian.o
00:05:19.349    LINK rpc_client_test
00:05:19.349    LINK pmr_persistence
00:05:19.349    CC test/nvme/reset/reset.o
00:05:19.607    LINK vhost_fuzz
00:05:19.607    CXX test/cpp_headers/env_dpdk.o
00:05:19.607    CXX test/cpp_headers/env.o
00:05:19.607    CC test/nvme/sgl/sgl.o
00:05:19.607    CXX test/cpp_headers/event.o
00:05:19.607    CC test/nvme/e2edp/nvme_dp.o
00:05:19.607    LINK aer
00:05:19.607    LINK abort
00:05:19.607    LINK reset
00:05:19.607    CXX test/cpp_headers/fd_group.o
00:05:19.866    CXX test/cpp_headers/fd.o
00:05:19.866    LINK sgl
00:05:19.866    CC test/blobfs/mkfs/mkfs.o
00:05:19.866    LINK nvme_dp
00:05:19.866    CXX test/cpp_headers/file.o
00:05:19.866    CC test/nvme/overhead/overhead.o
00:05:19.866    CC test/nvme/err_injection/err_injection.o
00:05:19.866    CC test/nvme/startup/startup.o
00:05:19.866    CC test/lvol/esnap/esnap.o
00:05:20.125    LINK mkfs
00:05:20.125    CC examples/nvmf/nvmf/nvmf.o
00:05:20.125    LINK dif
00:05:20.125    CXX test/cpp_headers/fsdev.o
00:05:20.125    CC test/nvme/reserve/reserve.o
00:05:20.125    LINK startup
00:05:20.125    LINK err_injection
00:05:20.125    CC test/nvme/simple_copy/simple_copy.o
00:05:20.125    LINK overhead
00:05:20.125    CXX test/cpp_headers/fsdev_module.o
00:05:20.125    CXX test/cpp_headers/ftl.o
00:05:20.384    CXX test/cpp_headers/fuse_dispatcher.o
00:05:20.384    CXX test/cpp_headers/gpt_spec.o
00:05:20.384    LINK reserve
00:05:20.384    CXX test/cpp_headers/hexlify.o
00:05:20.384    CXX test/cpp_headers/histogram_data.o
00:05:20.384    LINK nvmf
00:05:20.384    LINK simple_copy
00:05:20.384    CXX test/cpp_headers/idxd.o
00:05:20.384    CXX test/cpp_headers/idxd_spec.o
00:05:20.384    CXX test/cpp_headers/init.o
00:05:20.384    CC test/nvme/connect_stress/connect_stress.o
00:05:20.643    CC test/nvme/boot_partition/boot_partition.o
00:05:20.643    CXX test/cpp_headers/ioat.o
00:05:20.643    CXX test/cpp_headers/ioat_spec.o
00:05:20.643    CXX test/cpp_headers/iscsi_spec.o
00:05:20.643    CC test/nvme/compliance/nvme_compliance.o
00:05:20.643    CC test/nvme/fused_ordering/fused_ordering.o
00:05:20.643    CC test/bdev/bdevio/bdevio.o
00:05:20.643    LINK connect_stress
00:05:20.643    LINK boot_partition
00:05:20.643    CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:20.902    CXX test/cpp_headers/json.o
00:05:20.902    CC test/nvme/fdp/fdp.o
00:05:20.902    CXX test/cpp_headers/jsonrpc.o
00:05:20.902    CXX test/cpp_headers/keyring.o
00:05:20.902    LINK fused_ordering
00:05:20.902    LINK doorbell_aers
00:05:20.902    CC test/nvme/cuse/cuse.o
00:05:20.902    CXX test/cpp_headers/keyring_module.o
00:05:20.902    CXX test/cpp_headers/likely.o
00:05:20.902    LINK nvme_compliance
00:05:20.902    CXX test/cpp_headers/log.o
00:05:20.902    CXX test/cpp_headers/lvol.o
00:05:21.161    LINK bdevio
00:05:21.161    CXX test/cpp_headers/md5.o
00:05:21.161    CXX test/cpp_headers/memory.o
00:05:21.161    CXX test/cpp_headers/mmio.o
00:05:21.161    CXX test/cpp_headers/nbd.o
00:05:21.161    CXX test/cpp_headers/net.o
00:05:21.161    CXX test/cpp_headers/notify.o
00:05:21.161    LINK fdp
00:05:21.161    CXX test/cpp_headers/nvme.o
00:05:21.420    CXX test/cpp_headers/nvme_intel.o
00:05:21.420    CXX test/cpp_headers/nvme_ocssd.o
00:05:21.420    CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:21.420    CXX test/cpp_headers/nvme_spec.o
00:05:21.420    CXX test/cpp_headers/nvme_zns.o
00:05:21.420    CXX test/cpp_headers/nvmf_cmd.o
00:05:21.420    CXX test/cpp_headers/nvmf_fc_spec.o
00:05:21.420    CXX test/cpp_headers/nvmf.o
00:05:21.420    CXX test/cpp_headers/nvmf_spec.o
00:05:21.420    CXX test/cpp_headers/nvmf_transport.o
00:05:21.420    CXX test/cpp_headers/opal.o
00:05:21.420    CXX test/cpp_headers/opal_spec.o
00:05:21.420    CXX test/cpp_headers/pci_ids.o
00:05:21.420    CXX test/cpp_headers/pipe.o
00:05:21.420    CXX test/cpp_headers/queue.o
00:05:21.679    CXX test/cpp_headers/reduce.o
00:05:21.679    CXX test/cpp_headers/rpc.o
00:05:21.679    CXX test/cpp_headers/scheduler.o
00:05:21.679    CXX test/cpp_headers/scsi.o
00:05:21.679    CXX test/cpp_headers/scsi_spec.o
00:05:21.679    CXX test/cpp_headers/sock.o
00:05:21.679    CXX test/cpp_headers/stdinc.o
00:05:21.679    CXX test/cpp_headers/string.o
00:05:21.679    CXX test/cpp_headers/thread.o
00:05:21.679    CXX test/cpp_headers/trace.o
00:05:21.679    CXX test/cpp_headers/trace_parser.o
00:05:21.938    CXX test/cpp_headers/tree.o
00:05:21.938    CXX test/cpp_headers/ublk.o
00:05:21.938    CXX test/cpp_headers/util.o
00:05:21.938    CXX test/cpp_headers/uuid.o
00:05:21.938    CXX test/cpp_headers/version.o
00:05:21.938    CXX test/cpp_headers/vfio_user_pci.o
00:05:21.938    CXX test/cpp_headers/vfio_user_spec.o
00:05:21.938    CXX test/cpp_headers/vhost.o
00:05:21.938    CXX test/cpp_headers/vmd.o
00:05:21.938    CXX test/cpp_headers/xor.o
00:05:21.938    CXX test/cpp_headers/zipf.o
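
[annotation] The TEST_HEADER / CXX pairs above are a header self-sufficiency pass: every public spdk/*.h is compiled as its own C++ translation unit so each header must pull in everything it needs. A plausible reconstruction of one such unit; the generated file name and compiler flags are assumptions, the real build generates these automatically:

    # Hedged sketch of one header-check unit, not the actual generated source.
    printf '#include <spdk/accel.h>\nint main(void) { return 0; }\n' > cpp_headers_accel.cpp
    g++ -I include -std=c++11 -c cpp_headers_accel.cpp -o test/cpp_headers/accel.o
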
00:05:22.197    LINK cuse
00:05:26.392    LINK esnap
00:05:26.392  
00:05:26.392  real	1m26.194s
00:05:26.392  user	7m6.806s
00:05:26.392  sys	1m54.275s
00:05:26.392   16:15:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:26.392   16:15:55 make -- common/autotest_common.sh@10 -- $ set +x
00:05:26.392  ************************************
00:05:26.392  END TEST make
00:05:26.392  ************************************
00:05:26.392   16:15:55  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:26.392   16:15:55  -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:26.392   16:15:55  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:26.392   16:15:55  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:26.392   16:15:55  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:05:26.392   16:15:55  -- pm/common@44 -- $ pid=5283
00:05:26.392   16:15:55  -- pm/common@50 -- $ kill -TERM 5283
00:05:26.392   16:15:55  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:26.392   16:15:55  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:05:26.392   16:15:55  -- pm/common@44 -- $ pid=5285
00:05:26.392   16:15:55  -- pm/common@50 -- $ kill -TERM 5285
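
[annotation] The pm/common trace above is the resource-monitor teardown: each entry in MONITOR_RESOURCES has a pidfile under the power output directory, and the recorded pid (5283, 5285 here) is sent SIGTERM. Reconstructed from the trace; the $output variable is assumed to be spdk/../output:

    signal_monitor_resources() {
        local signal=$1 monitor pid pidfile
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            pidfile=$output/power/$monitor.pid
            [[ -e $pidfile ]] || continue     # monitor never started: nothing to stop
            pid=$(< "$pidfile")
            kill -"$signal" "$pid"
        done
    }
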
00:05:26.392   16:15:55  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:26.392   16:15:55  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:26.392    16:15:55  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:26.392     16:15:55  -- common/autotest_common.sh@1711 -- # lcov --version
00:05:26.392     16:15:55  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:26.393    16:15:55  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:26.393    16:15:55  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:26.393    16:15:55  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:26.393    16:15:55  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:26.393    16:15:55  -- scripts/common.sh@336 -- # IFS=.-:
00:05:26.393    16:15:55  -- scripts/common.sh@336 -- # read -ra ver1
00:05:26.393    16:15:55  -- scripts/common.sh@337 -- # IFS=.-:
00:05:26.393    16:15:55  -- scripts/common.sh@337 -- # read -ra ver2
00:05:26.393    16:15:55  -- scripts/common.sh@338 -- # local 'op=<'
00:05:26.393    16:15:55  -- scripts/common.sh@340 -- # ver1_l=2
00:05:26.393    16:15:55  -- scripts/common.sh@341 -- # ver2_l=1
00:05:26.393    16:15:55  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:26.393    16:15:55  -- scripts/common.sh@344 -- # case "$op" in
00:05:26.393    16:15:55  -- scripts/common.sh@345 -- # : 1
00:05:26.393    16:15:55  -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:26.393    16:15:55  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:26.393     16:15:55  -- scripts/common.sh@365 -- # decimal 1
00:05:26.393     16:15:55  -- scripts/common.sh@353 -- # local d=1
00:05:26.393     16:15:55  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:26.393     16:15:55  -- scripts/common.sh@355 -- # echo 1
00:05:26.393    16:15:55  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:26.393     16:15:55  -- scripts/common.sh@366 -- # decimal 2
00:05:26.393     16:15:55  -- scripts/common.sh@353 -- # local d=2
00:05:26.393     16:15:55  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:26.393     16:15:55  -- scripts/common.sh@355 -- # echo 2
00:05:26.393    16:15:55  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:26.393    16:15:55  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:26.393    16:15:55  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:26.393    16:15:55  -- scripts/common.sh@368 -- # return 0
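
[annotation] The scripts/common.sh trace above is a plain element-wise version comparison: both version strings are split on '.', '-' and ':', then compared numerically left to right. A simplified sketch of the logic (the real helper also normalizes each component through its decimal function and handles more operators):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: "<" fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller: "<" holds
        done
        return 1   # equal is not strictly less-than
    }

Here lt 1.15 2 succeeds (1 < 2 at the first component), which is why the lcov 1.x --rc options are exported in the lines that follow.
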
00:05:26.393    16:15:55  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:26.393    16:15:55  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:26.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.393  		--rc genhtml_branch_coverage=1
00:05:26.393  		--rc genhtml_function_coverage=1
00:05:26.393  		--rc genhtml_legend=1
00:05:26.393  		--rc geninfo_all_blocks=1
00:05:26.393  		--rc geninfo_unexecuted_blocks=1
00:05:26.393  		
00:05:26.393  		'
00:05:26.393    16:15:55  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:26.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.393  		--rc genhtml_branch_coverage=1
00:05:26.393  		--rc genhtml_function_coverage=1
00:05:26.393  		--rc genhtml_legend=1
00:05:26.393  		--rc geninfo_all_blocks=1
00:05:26.393  		--rc geninfo_unexecuted_blocks=1
00:05:26.393  		
00:05:26.393  		'
00:05:26.393    16:15:55  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:26.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.393  		--rc genhtml_branch_coverage=1
00:05:26.393  		--rc genhtml_function_coverage=1
00:05:26.393  		--rc genhtml_legend=1
00:05:26.393  		--rc geninfo_all_blocks=1
00:05:26.393  		--rc geninfo_unexecuted_blocks=1
00:05:26.393  		
00:05:26.393  		'
00:05:26.393    16:15:55  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:26.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.393  		--rc genhtml_branch_coverage=1
00:05:26.393  		--rc genhtml_function_coverage=1
00:05:26.393  		--rc genhtml_legend=1
00:05:26.393  		--rc geninfo_all_blocks=1
00:05:26.393  		--rc geninfo_unexecuted_blocks=1
00:05:26.393  		
00:05:26.393  		'
00:05:26.393   16:15:55  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:26.393     16:15:55  -- nvmf/common.sh@7 -- # uname -s
00:05:26.393    16:15:55  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:26.393    16:15:55  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:26.393    16:15:55  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:26.393    16:15:55  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:26.393    16:15:55  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:26.393    16:15:55  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:26.393    16:15:55  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:26.393    16:15:55  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:26.393    16:15:55  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:26.393     16:15:55  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:26.393    16:15:55  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:762461a4-9ecc-4976-86b8-dcd6ce49c43f
00:05:26.393    16:15:55  -- nvmf/common.sh@18 -- # NVME_HOSTID=762461a4-9ecc-4976-86b8-dcd6ce49c43f
00:05:26.393    16:15:55  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:26.393    16:15:55  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:26.393    16:15:55  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:26.393    16:15:55  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:26.393    16:15:55  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:26.393     16:15:55  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:26.393     16:15:55  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:26.393     16:15:55  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:26.393     16:15:55  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:26.393      16:15:55  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.393      16:15:55  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.393      16:15:55  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.393      16:15:55  -- paths/export.sh@5 -- # export PATH
00:05:26.393      16:15:55  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.393    16:15:55  -- nvmf/common.sh@51 -- # : 0
00:05:26.393    16:15:55  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:26.393    16:15:55  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:26.393    16:15:55  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:26.393    16:15:55  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:26.393    16:15:55  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:26.393    16:15:55  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:26.393  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:26.393    16:15:55  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:26.393    16:15:55  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:26.393    16:15:55  -- nvmf/common.sh@55 -- # have_pci_nics=0
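
[annotation] The "[: : integer expression expected" message above is a real, harmless shell error from nvmf/common.sh line 33: an unset flag reaches a numeric single-bracket test as the empty string, '[' '' -eq 1 ']'. A defaulted expansion is the usual fix; the flag name below is a placeholder, not the actual variable from the script:

    # Sketch of a guarded form of the failing test (flag name hypothetical).
    if [[ ${SPDK_TEST_SOME_FLAG:-0} -eq 1 ]]; then
        :   # guarded branch; its contents are not visible in this trace
    fi
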
00:05:26.393   16:15:55  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:26.393    16:15:55  -- spdk/autotest.sh@32 -- # uname -s
00:05:26.393   16:15:55  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:26.393   16:15:55  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:26.393   16:15:55  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:26.393   16:15:55  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:05:26.393   16:15:55  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
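
[annotation] autotest.sh then redirects kernel core dumps to SPDK's collector: the old pattern (the systemd-coredump pipe saved above) is remembered, and a pipe pattern pointing at core-collector.sh is installed. The redirect targets of the two echoes are not shown in this trace; a plausible reconstruction, with the destinations marked as assumptions:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p "$output/coredumps"
    # Assumed destinations: the trace shows the echoes but not their redirects.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    echo "$output/coredumps" > "$rootdir/.coredump_path"   # assumed file name
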
00:05:26.393   16:15:55  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:26.393    16:15:55  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:26.393   16:15:55  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:26.393   16:15:55  -- spdk/autotest.sh@48 -- # udevadm_pid=55982
00:05:26.393   16:15:55  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:26.393   16:15:55  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:26.393   16:15:55  -- pm/common@17 -- # local monitor
00:05:26.393   16:15:55  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:26.393   16:15:55  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:26.393   16:15:55  -- pm/common@25 -- # sleep 1
00:05:26.393    16:15:55  -- pm/common@21 -- # date +%s
00:05:26.393    16:15:55  -- pm/common@21 -- # date +%s
00:05:26.393   16:15:55  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733760955
00:05:26.393   16:15:55  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733760955
00:05:26.393  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733760955_collect-cpu-load.pm.log
00:05:26.393  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733760955_collect-vmstat.pm.log
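
[annotation] start_monitor_resources launches one collector per resource with a log prefix derived from date +%s, which is why both "Redirecting to ..." lines above share the 1733760955 suffix. Reconstructed from the trace ($output assumed to be spdk/../output):

    now=$(date +%s)
    "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output/power" -l -p "monitor.autotest.sh.$now" &
    "$rootdir/scripts/perf/pm/collect-vmstat"  -d "$output/power" -l -p "monitor.autotest.sh.$now" &
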
00:05:27.771   16:15:56  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:27.771   16:15:56  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:27.771   16:15:56  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:27.771   16:15:56  -- common/autotest_common.sh@10 -- # set +x
00:05:27.771   16:15:56  -- spdk/autotest.sh@59 -- # create_test_list
00:05:27.771   16:15:56  -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:27.771   16:15:56  -- common/autotest_common.sh@10 -- # set +x
00:05:27.771     16:15:56  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:05:27.771    16:15:56  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:05:27.771   16:15:56  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:05:27.771   16:15:56  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:05:27.771   16:15:56  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:05:27.771   16:15:56  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:27.771    16:15:56  -- common/autotest_common.sh@1457 -- # uname
00:05:27.771   16:15:56  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:27.771   16:15:56  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:27.771    16:15:56  -- common/autotest_common.sh@1477 -- # uname
00:05:27.772   16:15:56  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:27.772   16:15:56  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:27.772   16:15:56  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:27.772  lcov: LCOV version 1.15
00:05:27.772   16:15:56  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:42.660  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:42.660  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
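
[annotation] The long lcov invocation above takes the zero-coverage baseline (-i) over the whole tree before any test runs; the gcno warning for nvme_stubs.gcno is expected, since a stubs file defines no functions. For reference, this flow typically finishes with a post-test capture merged into the baseline; the later commands and file names here are assumed, only the Baseline step is shown in this log:

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    $LCOV -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info   # before tests (above)
    # ... tests run ...
    $LCOV -q -c --no-external -t Tests -d "$src" -o cov_test.info         # after tests (assumed)
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info             # merge (assumed)
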
00:05:57.553   16:16:26  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:57.553   16:16:26  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:57.553   16:16:26  -- common/autotest_common.sh@10 -- # set +x
00:05:57.553   16:16:26  -- spdk/autotest.sh@78 -- # rm -f
00:05:57.553   16:16:26  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:58.122  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:58.692  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:05:58.692  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:58.692  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:05:58.692  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:05:58.692   16:16:27  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:58.692   16:16:27  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:05:58.692   16:16:27  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:05:58.692   16:16:27  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:05:58.692   16:16:27  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:05:58.692   16:16:27  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:05:58.692   16:16:27  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:05:58.692   16:16:27  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:05:58.692   16:16:27  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0
00:05:58.692   16:16:27  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2
00:05:58.692   16:16:27  -- common/autotest_common.sh@1650 -- # local device=nvme2n2
00:05:58.692   16:16:27  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3
00:05:58.692   16:16:27  -- common/autotest_common.sh@1650 -- # local device=nvme2n3
00:05:58.692   16:16:27  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0
00:05:58.692   16:16:27  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:05:58.692   16:16:27  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:05:58.692   16:16:27  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:05:58.692   16:16:27  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
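
[annotation] get_zoned_devs, traced above, walks every controller under /sys/class/nvme and marks a namespace zoned when /sys/block/<ns>/queue/zoned reads anything other than "none"; here every namespace reports "none", so zoned_devs stays empty and the "(( 0 > 0 ))" check below falls through. A condensed sketch; the sysfs source of the bdf values is an assumption:

    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }
    declare -A zoned_devs=()
    for nvme in /sys/class/nvme/nvme*; do
        bdf=$(< "$nvme/address")   # assumed source of the bdf seen in the trace
        for ns in "$nvme/"nvme*n*; do
            is_block_zoned "$(basename "$ns")" && zoned_devs[$(basename "$ns")]=$bdf
        done
    done
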
00:05:58.692   16:16:27  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:58.692   16:16:27  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:58.692   16:16:27  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:58.692   16:16:27  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:58.692   16:16:27  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:58.692   16:16:27  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:58.953  No valid GPT data, bailing
00:05:58.953    16:16:27  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:58.953   16:16:27  -- scripts/common.sh@394 -- # pt=
00:05:58.953   16:16:27  -- scripts/common.sh@395 -- # return 1
00:05:58.953   16:16:27  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:58.953  1+0 records in
00:05:58.953  1+0 records out
00:05:58.953  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01954 s, 53.7 MB/s
00:05:58.953   16:16:27  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:58.953   16:16:27  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:58.953   16:16:27  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:05:58.953   16:16:27  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:05:58.953   16:16:27  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:58.953  No valid GPT data, bailing
00:05:58.953    16:16:28  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:58.953   16:16:28  -- scripts/common.sh@394 -- # pt=
00:05:58.953   16:16:28  -- scripts/common.sh@395 -- # return 1
00:05:58.953   16:16:28  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:58.953  1+0 records in
00:05:58.953  1+0 records out
00:05:58.953  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0034735 s, 302 MB/s
00:05:58.953   16:16:28  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:58.953   16:16:28  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:58.953   16:16:28  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:05:58.953   16:16:28  -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:05:58.953   16:16:28  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:05:58.953  No valid GPT data, bailing
00:05:58.953    16:16:28  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:05:58.953   16:16:28  -- scripts/common.sh@394 -- # pt=
00:05:58.953   16:16:28  -- scripts/common.sh@395 -- # return 1
00:05:58.953   16:16:28  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:05:58.953  1+0 records in
00:05:58.953  1+0 records out
00:05:58.953  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611434 s, 171 MB/s
00:05:58.953   16:16:28  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:58.953   16:16:28  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:58.953   16:16:28  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2
00:05:58.953   16:16:28  -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt
00:05:58.953   16:16:28  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2
00:05:59.213  No valid GPT data, bailing
00:05:59.213    16:16:28  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2
00:05:59.213   16:16:28  -- scripts/common.sh@394 -- # pt=
00:05:59.213   16:16:28  -- scripts/common.sh@395 -- # return 1
00:05:59.213   16:16:28  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1
00:05:59.213  1+0 records in
00:05:59.213  1+0 records out
00:05:59.213  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595915 s, 176 MB/s
00:05:59.213   16:16:28  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:59.213   16:16:28  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:59.213   16:16:28  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3
00:05:59.213   16:16:28  -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt
00:05:59.213   16:16:28  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3
00:05:59.213  No valid GPT data, bailing
00:05:59.213    16:16:28  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3
00:05:59.213   16:16:28  -- scripts/common.sh@394 -- # pt=
00:05:59.213   16:16:28  -- scripts/common.sh@395 -- # return 1
00:05:59.213   16:16:28  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1
00:05:59.213  1+0 records in
00:05:59.213  1+0 records out
00:05:59.213  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609399 s, 172 MB/s
00:05:59.213   16:16:28  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:59.213   16:16:28  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:59.213   16:16:28  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:05:59.213   16:16:28  -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:05:59.213   16:16:28  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:05:59.213  No valid GPT data, bailing
00:05:59.213    16:16:28  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:05:59.213   16:16:28  -- scripts/common.sh@394 -- # pt=
00:05:59.213   16:16:28  -- scripts/common.sh@395 -- # return 1
00:05:59.213   16:16:28  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:05:59.213  1+0 records in
00:05:59.213  1+0 records out
00:05:59.213  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646887 s, 162 MB/s
00:05:59.213   16:16:28  -- spdk/autotest.sh@105 -- # sync
00:05:59.472   16:16:28  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:59.472   16:16:28  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:59.472    16:16:28  -- common/autotest_common.sh@22 -- # reap_spdk_processes
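
[annotation] The per-namespace loop above first asks spdk-gpt.py whether the device carries SPDK-style GPT data ("No valid GPT data, bailing"), falls back to blkid for a partition-table type, and zeroes the first MiB of anything unclaimed so later tests start from clean namespaces. Condensed from the trace:

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do                   # namespaces, skipping partitions
        pt=$(blkid -s PTTYPE -o value "$dev") || pt=
        if [[ -z $pt ]]; then                          # no partition table found
            dd if=/dev/zero of="$dev" bs=1M count=1    # wipe the first MiB
        fi
    done
    sync
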
00:06:02.771    16:16:31  -- spdk/autotest.sh@111 -- # uname -s
00:06:02.771   16:16:31  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:02.771   16:16:31  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:02.771   16:16:31  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:03.031  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:03.603  Hugepages
00:06:03.603  node     hugesize     free /  total
00:06:03.603  node0   1048576kB        0 /      0
00:06:03.603  node0      2048kB        0 /      0
00:06:03.603  
00:06:03.603  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:03.603  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:06:03.863  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:06:03.863  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:06:04.122  NVMe                      0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:06:04.122  NVMe                      0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
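
[annotation] The table above comes from setup.sh status; the BDF, vendor, device, and driver columns can be read straight out of sysfs. A rough, hedged equivalent (the real script adds NUMA and block-device resolution):

    for dev in /sys/bus/pci/devices/*; do
        driver=unknown
        [[ -e $dev/driver ]] && driver=$(basename "$(readlink -f "$dev/driver")")
        printf '%-14s %s %s %s\n' \
            "$(basename "$dev")" "$(< "$dev/vendor")" "$(< "$dev/device")" "$driver"
    done
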
00:06:04.122    16:16:33  -- spdk/autotest.sh@117 -- # uname -s
00:06:04.122   16:16:33  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:04.122   16:16:33  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:04.122   16:16:33  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:05.062  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:05.633  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:05.633  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:06:05.633  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:05.894  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:06:05.894   16:16:34  -- common/autotest_common.sh@1517 -- # sleep 1
00:06:06.834   16:16:35  -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:06.834   16:16:35  -- common/autotest_common.sh@1518 -- # local bdfs
00:06:06.834   16:16:35  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:06.834    16:16:35  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:06.834    16:16:35  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:06.834    16:16:35  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:06.834    16:16:35  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:06.834     16:16:35  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:06.834     16:16:35  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:07.094    16:16:36  -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:06:07.094    16:16:36  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
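
[annotation] get_nvme_bdfs, traced above, derives the controller list from SPDK's own config generator rather than lspci: gen_nvme.sh emits a bdev JSON config and jq extracts each controller's PCI address (traddr). Essentially verbatim from the trace:

    get_nvme_bdfs() {
        local -a bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # no controllers found
        printf '%s\n' "${bdfs[@]}"
    }
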
00:06:07.094   16:16:36  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:07.664  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:07.924  Waiting for block devices as requested
00:06:07.924  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:07.924  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:08.184  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:06:08.184  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:06:13.469  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:06:13.470   16:16:42  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:13.470    16:16:42  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:13.470    16:16:42  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:13.470    16:16:42  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:13.470     16:16:42  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:13.470    16:16:42  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:06:13.470   16:16:42  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:06:13.470   16:16:42  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:13.470   16:16:42  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:13.470   16:16:42  -- common/autotest_common.sh@1543 -- # continue
00:06:13.470   16:16:42  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:13.470    16:16:42  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:06:13.470    16:16:42  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:13.470    16:16:42  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:13.470     16:16:42  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:13.470    16:16:42  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:13.470   16:16:42  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:13.470   16:16:42  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:13.470   16:16:42  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:13.470   16:16:42  -- common/autotest_common.sh@1543 -- # continue
00:06:13.470   16:16:42  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:13.470    16:16:42  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme
00:06:13.470    16:16:42  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:06:13.470    16:16:42  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]]
00:06:13.470     16:16:42  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:06:13.470    16:16:42  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:13.470   16:16:42  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:13.470   16:16:42  -- common/autotest_common.sh@1543 -- # continue
00:06:13.470   16:16:42  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:13.470    16:16:42  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:06:13.470     16:16:42  -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme
00:06:13.470    16:16:42  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:06:13.470    16:16:42  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]]
00:06:13.470     16:16:42  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:06:13.470    16:16:42  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3
00:06:13.470   16:16:42  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3
00:06:13.470   16:16:42  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:13.470    16:16:42  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:13.470   16:16:42  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:13.470    16:16:42  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:13.470   16:16:42  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:13.470   16:16:42  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:13.470   16:16:42  -- common/autotest_common.sh@1543 -- # continue
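The three iterations above apply one identical check per NVMe bdf: resolve the controller's sysfs path to a /dev/nvmeX node, read the OACS word from nvme id-ctrl, test the Namespace Management bit, and skip cleanup when unallocated capacity (unvmcap) is already zero. A condensed sketch of that loop, assuming nvme-cli is installed; the commands and the 0x8 mask are taken from the trace (oacs_ns_manage=8):

    # Sketch of the per-bdf check traced above (assumes nvme-cli and sysfs).
    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        printf '/dev/%s\n' "$(basename "$path")"
    }
    for bdf in 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        ctrlr=$(get_nvme_ctrlr_from_bdf "$bdf") || continue
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) || continue    # bit 3 of OACS: Namespace Management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue  # nothing unallocated, nothing to clean
        # ...namespace cleanup would run here...
    done

In this run every controller reports unvmcap of 0, so each iteration ends in the continue seen above.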
00:06:13.470   16:16:42  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:13.470   16:16:42  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:13.470   16:16:42  -- common/autotest_common.sh@10 -- # set +x
00:06:13.731   16:16:42  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:13.731   16:16:42  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:13.731   16:16:42  -- common/autotest_common.sh@10 -- # set +x
00:06:13.731   16:16:42  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:14.301  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:15.267  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:06:15.267  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:15.267  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:15.267  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
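setup.sh rebinds storage controllers for userspace use: the virtio-blk boot disk (1af4 1001) is skipped because vda carries active mounts, while the four emulated NVMe controllers (1b36 0010) move from the kernel nvme driver to uio_pci_generic. One common way to perform such a rebind through sysfs is sketched below; setup.sh's actual internals are not shown in this trace, so treat this as illustrative only (run as root):

    # Hypothetical minimal rebind of one device, mirroring "nvme -> uio_pci_generic".
    bdf=0000:00:10.0
    modprobe uio_pci_generic
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    # driver_override makes the next probe bind this specific device
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe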
00:06:15.267   16:16:44  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:15.267   16:16:44  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:15.267   16:16:44  -- common/autotest_common.sh@10 -- # set +x
00:06:15.267   16:16:44  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:15.267   16:16:44  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:15.267    16:16:44  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:15.267    16:16:44  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:15.267    16:16:44  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:15.267    16:16:44  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:15.267    16:16:44  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:15.267     16:16:44  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:15.267     16:16:44  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:15.267     16:16:44  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:15.267     16:16:44  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:15.267      16:16:44  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:15.267      16:16:44  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:15.528     16:16:44  -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:06:15.528     16:16:44  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:06:15.528    16:16:44  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:15.528     16:16:44  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:15.528    16:16:44  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:15.528    16:16:44  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:15.528    16:16:44  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:15.528     16:16:44  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:15.528    16:16:44  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:15.528    16:16:44  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:15.528    16:16:44  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:15.528     16:16:44  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device
00:06:15.528    16:16:44  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:15.528    16:16:44  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:15.528    16:16:44  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:15.528     16:16:44  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device
00:06:15.528    16:16:44  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:15.528    16:16:44  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:15.528    16:16:44  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:15.528    16:16:44  -- common/autotest_common.sh@1572 -- # return 0
00:06:15.528   16:16:44  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:15.528   16:16:44  -- common/autotest_common.sh@1580 -- # return 0
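opal_revert_cleanup only applies to controllers whose PCI device ID is 0x0a54 (an Intel datacenter NVMe part with OPAL support), so get_nvme_bdfs_by_id reads each device ID from sysfs and filters on it; the emulated controllers all report 0x0010, the filtered array is empty, and the revert is skipped. The filter reduces to roughly the following, with the bdf list hard-coded here instead of coming from gen_nvme.sh:

    # Sketch: keep only the NVMe bdfs whose PCI device ID matches $1.
    get_nvme_bdfs_by_id() {
        local target=$1 bdf device bdfs=()
        for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
            device=$(cat "/sys/bus/pci/devices/$bdf/device")
            [[ $device == "$target" ]] && bdfs+=("$bdf")
        done
        (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"
        return 0
    }
    get_nvme_bdfs_by_id 0x0a54   # prints nothing here: every device is 0x0010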
00:06:15.528   16:16:44  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:15.528   16:16:44  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:15.528   16:16:44  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:15.528   16:16:44  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:15.528   16:16:44  -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:15.528   16:16:44  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:15.528   16:16:44  -- common/autotest_common.sh@10 -- # set +x
00:06:15.528   16:16:44  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:15.528   16:16:44  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:15.528   16:16:44  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.528   16:16:44  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.528   16:16:44  -- common/autotest_common.sh@10 -- # set +x
00:06:15.528  ************************************
00:06:15.528  START TEST env
00:06:15.528  ************************************
00:06:15.528   16:16:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:15.788  * Looking for test storage...
00:06:15.788  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:15.788     16:16:44 env -- common/autotest_common.sh@1711 -- # lcov --version
00:06:15.788     16:16:44 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:15.788    16:16:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:15.788    16:16:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:15.788    16:16:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:15.788    16:16:44 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:15.788    16:16:44 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:15.788    16:16:44 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:15.788    16:16:44 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:15.788    16:16:44 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:15.788    16:16:44 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:15.788    16:16:44 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:15.788    16:16:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:15.788    16:16:44 env -- scripts/common.sh@344 -- # case "$op" in
00:06:15.788    16:16:44 env -- scripts/common.sh@345 -- # : 1
00:06:15.788    16:16:44 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:15.788    16:16:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:15.788     16:16:44 env -- scripts/common.sh@365 -- # decimal 1
00:06:15.788     16:16:44 env -- scripts/common.sh@353 -- # local d=1
00:06:15.788     16:16:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:15.788     16:16:44 env -- scripts/common.sh@355 -- # echo 1
00:06:15.788    16:16:44 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:15.788     16:16:44 env -- scripts/common.sh@366 -- # decimal 2
00:06:15.788     16:16:44 env -- scripts/common.sh@353 -- # local d=2
00:06:15.788     16:16:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:15.788     16:16:44 env -- scripts/common.sh@355 -- # echo 2
00:06:15.788    16:16:44 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:15.788    16:16:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:15.788    16:16:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:15.788    16:16:44 env -- scripts/common.sh@368 -- # return 0
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:15.788  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.788  		--rc genhtml_branch_coverage=1
00:06:15.788  		--rc genhtml_function_coverage=1
00:06:15.788  		--rc genhtml_legend=1
00:06:15.788  		--rc geninfo_all_blocks=1
00:06:15.788  		--rc geninfo_unexecuted_blocks=1
00:06:15.788  		
00:06:15.788  		'
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:15.788  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.788  		--rc genhtml_branch_coverage=1
00:06:15.788  		--rc genhtml_function_coverage=1
00:06:15.788  		--rc genhtml_legend=1
00:06:15.788  		--rc geninfo_all_blocks=1
00:06:15.788  		--rc geninfo_unexecuted_blocks=1
00:06:15.788  		
00:06:15.788  		'
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:15.788  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.788  		--rc genhtml_branch_coverage=1
00:06:15.788  		--rc genhtml_function_coverage=1
00:06:15.788  		--rc genhtml_legend=1
00:06:15.788  		--rc geninfo_all_blocks=1
00:06:15.788  		--rc geninfo_unexecuted_blocks=1
00:06:15.788  		
00:06:15.788  		'
00:06:15.788    16:16:44 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:15.788  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:15.788  		--rc genhtml_branch_coverage=1
00:06:15.788  		--rc genhtml_function_coverage=1
00:06:15.788  		--rc genhtml_legend=1
00:06:15.788  		--rc geninfo_all_blocks=1
00:06:15.788  		--rc geninfo_unexecuted_blocks=1
00:06:15.788  		
00:06:15.788  		'
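Before running the suite, autotest_common.sh probes the installed lcov version and, through cmp_versions, checks whether it is older than 2.x so the legacy --rc coverage options can be exported (the LCOV_OPTS/LCOV blocks above). The field-wise comparison traced at scripts/common.sh@333-368 reduces to a version_lt helper along these lines:

    # Sketch of the traced compare: succeed when $1 < $2 (fields split on . - :).
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "old lcov: use legacy --rc options"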
00:06:15.788   16:16:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:15.788   16:16:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.788   16:16:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.788   16:16:44 env -- common/autotest_common.sh@10 -- # set +x
00:06:15.788  ************************************
00:06:15.788  START TEST env_memory
00:06:15.788  ************************************
00:06:15.788   16:16:44 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:15.788  
00:06:15.788  
00:06:15.788       CUnit - A unit testing framework for C - Version 2.1-3
00:06:15.788       http://cunit.sourceforge.net/
00:06:15.788  
00:06:15.788  
00:06:15.788  Suite: memory
00:06:15.788    Test: alloc and free memory map ...[2024-12-09 16:16:44.901269] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:15.788  passed
00:06:15.788    Test: mem map translation ...[2024-12-09 16:16:44.943876] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:15.788  [2024-12-09 16:16:44.943923] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:15.788  [2024-12-09 16:16:44.943983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:15.788  [2024-12-09 16:16:44.944005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:16.048  passed
00:06:16.048    Test: mem map registration ...[2024-12-09 16:16:45.010254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:16.048  [2024-12-09 16:16:45.010295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:16.048  passed
00:06:16.048    Test: mem map adjacent registrations ...passed
00:06:16.048  
00:06:16.048  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:16.048                suites      1      1    n/a      0        0
00:06:16.048                 tests      4      4      4      0        0
00:06:16.048               asserts    152    152    152      0      n/a
00:06:16.048  
00:06:16.048  Elapsed time =    0.238 seconds
00:06:16.048  
00:06:16.048  real	0m0.291s
00:06:16.048  user	0m0.240s
00:06:16.048  sys	0m0.041s
00:06:16.048   16:16:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.048   16:16:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:16.048  ************************************
00:06:16.048  END TEST env_memory
00:06:16.048  ************************************
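The *ERROR* lines inside env_memory are expected: the suite feeds spdk_mem_map_set_translation and spdk_mem_register deliberately invalid addresses and lengths and asserts that they are rejected, which is why 4 tests and 152 asserts pass in about 0.24 s despite the error output. The START/END banners and the real/user/sys lines around every suite come from the run_test wrapper, whose approximate shape is sketched below (a hedged reconstruction; the real helper also manages xtrace and timing bookkeeping):

    # Approximate shape of the run_test wrapper used throughout this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }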
00:06:16.048   16:16:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:16.048   16:16:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.048   16:16:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.048   16:16:45 env -- common/autotest_common.sh@10 -- # set +x
00:06:16.048  ************************************
00:06:16.048  START TEST env_vtophys
00:06:16.048  ************************************
00:06:16.048   16:16:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:16.308  EAL: lib.eal log level changed from notice to debug
00:06:16.308  EAL: Detected lcore 0 as core 0 on socket 0
00:06:16.308  EAL: Detected lcore 1 as core 0 on socket 0
00:06:16.308  EAL: Detected lcore 2 as core 0 on socket 0
00:06:16.308  EAL: Detected lcore 3 as core 0 on socket 0
00:06:16.309  EAL: Detected lcore 4 as core 0 on socket 0
00:06:16.309  EAL: Detected lcore 5 as core 0 on socket 0
00:06:16.309  EAL: Detected lcore 6 as core 0 on socket 0
00:06:16.309  EAL: Detected lcore 7 as core 0 on socket 0
00:06:16.309  EAL: Detected lcore 8 as core 0 on socket 0
00:06:16.309  EAL: Detected lcore 9 as core 0 on socket 0
00:06:16.309  EAL: Maximum logical cores by configuration: 128
00:06:16.309  EAL: Detected CPU lcores: 10
00:06:16.309  EAL: Detected NUMA nodes: 1
00:06:16.309  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:16.309  EAL: Detected shared linkage of DPDK
00:06:16.309  EAL: No shared files mode enabled, IPC will be disabled
00:06:16.309  EAL: Selected IOVA mode 'PA'
00:06:16.309  EAL: Probing VFIO support...
00:06:16.309  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:16.309  EAL: VFIO modules not loaded, skipping VFIO support...
00:06:16.309  EAL: Ask a virtual area of 0x2e000 bytes
00:06:16.309  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:16.309  EAL: Setting up physically contiguous memory...
00:06:16.309  EAL: Setting maximum number of open files to 524288
00:06:16.309  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:16.309  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:16.309  EAL: Ask a virtual area of 0x61000 bytes
00:06:16.309  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:16.309  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.309  EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.309  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:16.309  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:16.309  EAL: Ask a virtual area of 0x61000 bytes
00:06:16.309  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:16.309  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.309  EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.309  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:16.309  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:16.309  EAL: Ask a virtual area of 0x61000 bytes
00:06:16.309  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:16.309  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.309  EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.309  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:16.309  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:16.309  EAL: Ask a virtual area of 0x61000 bytes
00:06:16.309  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:16.309  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:16.309  EAL: Ask a virtual area of 0x400000000 bytes
00:06:16.309  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:16.309  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:16.309  EAL: Hugepages will be freed exactly as allocated.
00:06:16.309  EAL: No shared files mode enabled, IPC is disabled
00:06:16.309  EAL: No shared files mode enabled, IPC is disabled
00:06:16.309  EAL: TSC frequency is ~2490000 KHz
00:06:16.309  EAL: Main lcore 0 is ready (tid=7f2bfaf40a40;cpuset=[0])
00:06:16.309  EAL: Trying to obtain current memory policy.
00:06:16.309  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.309  EAL: Restoring previous memory policy: 0
00:06:16.309  EAL: request: mp_malloc_sync
00:06:16.309  EAL: No shared files mode enabled, IPC is disabled
00:06:16.309  EAL: Heap on socket 0 was expanded by 2MB
00:06:16.309  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:16.309  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:16.309  EAL: Mem event callback 'spdk:(nil)' registered
00:06:16.309  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:16.309  
00:06:16.309  
00:06:16.309       CUnit - A unit testing framework for C - Version 2.1-3
00:06:16.309       http://cunit.sourceforge.net/
00:06:16.309  
00:06:16.309  
00:06:16.309  Suite: components_suite
00:06:16.878    Test: vtophys_malloc_test ...passed
00:06:16.878    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:16.878  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.878  EAL: Restoring previous memory policy: 4
00:06:16.878  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.878  EAL: request: mp_malloc_sync
00:06:16.878  EAL: No shared files mode enabled, IPC is disabled
00:06:16.878  EAL: Heap on socket 0 was expanded by 4MB
00:06:16.878  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.878  EAL: request: mp_malloc_sync
00:06:16.878  EAL: No shared files mode enabled, IPC is disabled
00:06:16.878  EAL: Heap on socket 0 was shrunk by 4MB
00:06:16.878  EAL: Trying to obtain current memory policy.
00:06:16.878  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.878  EAL: Restoring previous memory policy: 4
00:06:16.878  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.878  EAL: request: mp_malloc_sync
00:06:16.878  EAL: No shared files mode enabled, IPC is disabled
00:06:16.878  EAL: Heap on socket 0 was expanded by 6MB
00:06:16.878  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.878  EAL: request: mp_malloc_sync
00:06:16.878  EAL: No shared files mode enabled, IPC is disabled
00:06:16.878  EAL: Heap on socket 0 was shrunk by 6MB
00:06:16.879  EAL: Trying to obtain current memory policy.
00:06:16.879  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.879  EAL: Restoring previous memory policy: 4
00:06:16.879  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.879  EAL: request: mp_malloc_sync
00:06:16.879  EAL: No shared files mode enabled, IPC is disabled
00:06:16.879  EAL: Heap on socket 0 was expanded by 10MB
00:06:16.879  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.879  EAL: request: mp_malloc_sync
00:06:16.879  EAL: No shared files mode enabled, IPC is disabled
00:06:16.879  EAL: Heap on socket 0 was shrunk by 10MB
00:06:16.879  EAL: Trying to obtain current memory policy.
00:06:16.879  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.879  EAL: Restoring previous memory policy: 4
00:06:16.879  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.879  EAL: request: mp_malloc_sync
00:06:16.879  EAL: No shared files mode enabled, IPC is disabled
00:06:16.879  EAL: Heap on socket 0 was expanded by 18MB
00:06:16.879  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.879  EAL: request: mp_malloc_sync
00:06:16.879  EAL: No shared files mode enabled, IPC is disabled
00:06:16.879  EAL: Heap on socket 0 was shrunk by 18MB
00:06:16.879  EAL: Trying to obtain current memory policy.
00:06:16.879  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:16.879  EAL: Restoring previous memory policy: 4
00:06:16.879  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.879  EAL: request: mp_malloc_sync
00:06:16.879  EAL: No shared files mode enabled, IPC is disabled
00:06:16.879  EAL: Heap on socket 0 was expanded by 34MB
00:06:16.879  EAL: Calling mem event callback 'spdk:(nil)'
00:06:16.879  EAL: request: mp_malloc_sync
00:06:16.879  EAL: No shared files mode enabled, IPC is disabled
00:06:16.879  EAL: Heap on socket 0 was shrunk by 34MB
00:06:17.138  EAL: Trying to obtain current memory policy.
00:06:17.138  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.138  EAL: Restoring previous memory policy: 4
00:06:17.138  EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.138  EAL: request: mp_malloc_sync
00:06:17.138  EAL: No shared files mode enabled, IPC is disabled
00:06:17.138  EAL: Heap on socket 0 was expanded by 66MB
00:06:17.138  EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.138  EAL: request: mp_malloc_sync
00:06:17.138  EAL: No shared files mode enabled, IPC is disabled
00:06:17.138  EAL: Heap on socket 0 was shrunk by 66MB
00:06:17.398  EAL: Trying to obtain current memory policy.
00:06:17.398  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.398  EAL: Restoring previous memory policy: 4
00:06:17.398  EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.398  EAL: request: mp_malloc_sync
00:06:17.398  EAL: No shared files mode enabled, IPC is disabled
00:06:17.398  EAL: Heap on socket 0 was expanded by 130MB
00:06:17.398  EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.658  EAL: request: mp_malloc_sync
00:06:17.658  EAL: No shared files mode enabled, IPC is disabled
00:06:17.658  EAL: Heap on socket 0 was shrunk by 130MB
00:06:17.658  EAL: Trying to obtain current memory policy.
00:06:17.658  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.917  EAL: Restoring previous memory policy: 4
00:06:17.917  EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.917  EAL: request: mp_malloc_sync
00:06:17.917  EAL: No shared files mode enabled, IPC is disabled
00:06:17.917  EAL: Heap on socket 0 was expanded by 258MB
00:06:18.176  EAL: Calling mem event callback 'spdk:(nil)'
00:06:18.176  EAL: request: mp_malloc_sync
00:06:18.176  EAL: No shared files mode enabled, IPC is disabled
00:06:18.176  EAL: Heap on socket 0 was shrunk by 258MB
00:06:18.746  EAL: Trying to obtain current memory policy.
00:06:18.746  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:18.746  EAL: Restoring previous memory policy: 4
00:06:18.746  EAL: Calling mem event callback 'spdk:(nil)'
00:06:18.746  EAL: request: mp_malloc_sync
00:06:18.746  EAL: No shared files mode enabled, IPC is disabled
00:06:18.746  EAL: Heap on socket 0 was expanded by 514MB
00:06:19.684  EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.684  EAL: request: mp_malloc_sync
00:06:19.684  EAL: No shared files mode enabled, IPC is disabled
00:06:19.684  EAL: Heap on socket 0 was shrunk by 514MB
00:06:20.623  EAL: Trying to obtain current memory policy.
00:06:20.623  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:20.883  EAL: Restoring previous memory policy: 4
00:06:20.883  EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.883  EAL: request: mp_malloc_sync
00:06:20.883  EAL: No shared files mode enabled, IPC is disabled
00:06:20.883  EAL: Heap on socket 0 was expanded by 1026MB
00:06:22.791  EAL: Calling mem event callback 'spdk:(nil)'
00:06:22.791  EAL: request: mp_malloc_sync
00:06:22.791  EAL: No shared files mode enabled, IPC is disabled
00:06:22.791  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:24.700  passed
00:06:24.700  
00:06:24.700  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:24.700                suites      1      1    n/a      0        0
00:06:24.700                 tests      2      2      2      0        0
00:06:24.700               asserts   5705   5705   5705      0      n/a
00:06:24.700  
00:06:24.700  Elapsed time =    7.957 seconds
00:06:24.700  EAL: Calling mem event callback 'spdk:(nil)'
00:06:24.700  EAL: request: mp_malloc_sync
00:06:24.700  EAL: No shared files mode enabled, IPC is disabled
00:06:24.700  EAL: Heap on socket 0 was shrunk by 2MB
00:06:24.700  EAL: No shared files mode enabled, IPC is disabled
00:06:24.700  EAL: No shared files mode enabled, IPC is disabled
00:06:24.700  EAL: No shared files mode enabled, IPC is disabled
00:06:24.700  
00:06:24.700  real	0m8.301s
00:06:24.700  user	0m7.286s
00:06:24.700  sys	0m0.858s
00:06:24.700   16:16:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:24.700   16:16:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:24.700  ************************************
00:06:24.700  END TEST env_vtophys
00:06:24.700  ************************************
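The long expanded-by/shrunk-by ladder is vtophys_spdk_malloc_test walking an increasing allocation series (4MB, 6MB, 10MB, ... 1026MB): each allocation fires the registered 'spdk:(nil)' mem event callback, EAL grows the hugepage heap, and the matching free shrinks it again, with the initial 2MB bootstrap region released last. One quick way to check that invariant over a captured copy of this output (file name hypothetical):

    # Every heap expansion should have a shrink of the same size somewhere.
    log=vtophys.log
    diff <(grep -oE 'expanded by [0-9]+MB' "$log" | awk '{print $3}' | sort) \
         <(grep -oE 'shrunk by [0-9]+MB'   "$log" | awk '{print $3}' | sort) \
        && echo "expansions and shrinks balance"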
00:06:24.700   16:16:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:24.700   16:16:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:24.700   16:16:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:24.700   16:16:53 env -- common/autotest_common.sh@10 -- # set +x
00:06:24.700  ************************************
00:06:24.700  START TEST env_pci
00:06:24.700  ************************************
00:06:24.700   16:16:53 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:24.700  
00:06:24.700  
00:06:24.700       CUnit - A unit testing framework for C - Version 2.1-3
00:06:24.700       http://cunit.sourceforge.net/
00:06:24.700  
00:06:24.700  
00:06:24.701  Suite: pci
00:06:24.701    Test: pci_hook ...[2024-12-09 16:16:53.610832] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58810 has claimed it
00:06:24.701  passed
00:06:24.701  
00:06:24.701  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:24.701                suites      1      1    n/a      0        0
00:06:24.701                 tests      1      1      1      0        0
00:06:24.701               asserts     25     25     25      0      n/a
00:06:24.701  
00:06:24.701  Elapsed time =    0.008 seconds
00:06:24.701  EAL: Cannot find device (10000:00:01.0)
00:06:24.701  EAL: Failed to attach device on primary process
00:06:24.701  
00:06:24.701  real	0m0.113s
00:06:24.701  user	0m0.040s
00:06:24.701  sys	0m0.072s
00:06:24.701   16:16:53 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:24.701   16:16:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:24.701  ************************************
00:06:24.701  END TEST env_pci
00:06:24.701  ************************************
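env_pci's single pci_hook test claims a nonexistent address (10000:00:01.0) on purpose: the "Cannot create lock ... probably process 58810 has claimed it" message shows spdk_pci_device_claim refusing a device whose per-BDF lock file under /var/tmp is already held, and EAL's attach failure is the asserted result. The claim idea itself is an advisory lock per device node; an illustrative shell analogue (not the SPDK implementation):

    # Emulate a per-BDF claim with an flock'd lock file, analogous to
    # /var/tmp/spdk_pci_lock_<bdf> from the message above.
    bdf=10000:00:01.0
    lock="/var/tmp/spdk_pci_lock_${bdf}"
    exec 9>"$lock"
    if ! flock -n 9; then
        echo "cannot claim $bdf: $lock is held by another process" >&2
        exit 1
    fi
    echo "$$ claimed $bdf"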
00:06:24.701   16:16:53 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:24.701    16:16:53 env -- env/env.sh@15 -- # uname
00:06:24.701   16:16:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:24.701   16:16:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:24.701   16:16:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:24.701   16:16:53 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:24.701   16:16:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:24.701   16:16:53 env -- common/autotest_common.sh@10 -- # set +x
00:06:24.701  ************************************
00:06:24.701  START TEST env_dpdk_post_init
00:06:24.701  ************************************
00:06:24.701   16:16:53 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:24.701  EAL: Detected CPU lcores: 10
00:06:24.701  EAL: Detected NUMA nodes: 1
00:06:24.701  EAL: Detected shared linkage of DPDK
00:06:24.701  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:24.701  EAL: Selected IOVA mode 'PA'
00:06:24.961  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:24.961  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:06:24.961  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:06:24.961  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1)
00:06:24.961  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1)
00:06:24.961  Starting DPDK initialization...
00:06:24.961  Starting SPDK post initialization...
00:06:24.961  SPDK NVMe probe
00:06:24.961  Attaching to 0000:00:10.0
00:06:24.961  Attaching to 0000:00:11.0
00:06:24.961  Attaching to 0000:00:12.0
00:06:24.961  Attaching to 0000:00:13.0
00:06:24.961  Attached to 0000:00:10.0
00:06:24.961  Attached to 0000:00:11.0
00:06:24.961  Attached to 0000:00:13.0
00:06:24.961  Attached to 0000:00:12.0
00:06:24.961  Cleaning up...
00:06:24.961  
00:06:24.961  real	0m0.324s
00:06:24.961  user	0m0.101s
00:06:24.961  sys	0m0.126s
00:06:24.961   16:16:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:24.961   16:16:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:24.961  ************************************
00:06:24.961  END TEST env_dpdk_post_init
00:06:24.961  ************************************
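env_dpdk_post_init boots EAL on one core and lets the spdk_nvme driver probe all four controllers; the attach completions land out of order (13.0 before 12.0) because probing completes asynchronously. The argv assembly traced at env/env.sh@14-24 is plain conditional flag building, roughly:

    # Sketch of the argv construction at env/env.sh; $SPDK_DIR is hypothetical.
    argv="-c 0x1 "
    if [ "$(uname)" = Linux ]; then
        # pin DPDK's virtual address base so mappings land predictably
        argv+="--base-virtaddr=0x200000000000"
    fi
    run_test env_dpdk_post_init \
        "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv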
00:06:25.221    16:16:54 env -- env/env.sh@26 -- # uname
00:06:25.221   16:16:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:25.221   16:16:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:25.221   16:16:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.221   16:16:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.221   16:16:54 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.221  ************************************
00:06:25.221  START TEST env_mem_callbacks
00:06:25.221  ************************************
00:06:25.221   16:16:54 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:25.221  EAL: Detected CPU lcores: 10
00:06:25.221  EAL: Detected NUMA nodes: 1
00:06:25.221  EAL: Detected shared linkage of DPDK
00:06:25.221  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:25.221  EAL: Selected IOVA mode 'PA'
00:06:25.221  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:25.221  
00:06:25.221  
00:06:25.221       CUnit - A unit testing framework for C - Version 2.1-3
00:06:25.221       http://cunit.sourceforge.net/
00:06:25.221  
00:06:25.221  
00:06:25.221  Suite: memory
00:06:25.221    Test: test ...
00:06:25.221  register 0x200000200000 2097152
00:06:25.221  malloc 3145728
00:06:25.221  register 0x200000400000 4194304
00:06:25.221  buf 0x2000004fffc0 len 3145728 PASSED
00:06:25.221  malloc 64
00:06:25.221  buf 0x2000004ffec0 len 64 PASSED
00:06:25.221  malloc 4194304
00:06:25.221  register 0x200000800000 6291456
00:06:25.221  buf 0x2000009fffc0 len 4194304 PASSED
00:06:25.221  free 0x2000004fffc0 3145728
00:06:25.221  free 0x2000004ffec0 64
00:06:25.221  unregister 0x200000400000 4194304 PASSED
00:06:25.482  free 0x2000009fffc0 4194304
00:06:25.482  unregister 0x200000800000 6291456 PASSED
00:06:25.482  malloc 8388608
00:06:25.482  register 0x200000400000 10485760
00:06:25.482  buf 0x2000005fffc0 len 8388608 PASSED
00:06:25.482  free 0x2000005fffc0 8388608
00:06:25.482  unregister 0x200000400000 10485760 PASSED
00:06:25.482  passed
00:06:25.482  
00:06:25.482  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:25.482                suites      1      1    n/a      0        0
00:06:25.482                 tests      1      1      1      0        0
00:06:25.482               asserts     15     15     15      0      n/a
00:06:25.482  
00:06:25.482  Elapsed time =    0.082 seconds
00:06:25.482  
00:06:25.482  real	0m0.292s
00:06:25.482  user	0m0.106s
00:06:25.482  sys	0m0.084s
00:06:25.482   16:16:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.482   16:16:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:25.482  ************************************
00:06:25.482  END TEST env_mem_callbacks
00:06:25.482  ************************************
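The register/unregister lines above show the mem-callback contract: DPDK coalesces heap growth, so the 3MB and 4MB mallocs surface as 2MB, 4MB, and 6MB region registrations, and a region is only unregistered once its last buffer is freed. A quick consistency check over a captured copy of the raw test output (file name hypothetical) folds the events into a running byte count:

    # Net bytes still registered at the end of the captured excerpt.
    awk '/^register /   { total += $3 }
         /^unregister / { total -= $3 }
         END { print "bytes still registered:", total }' mem_callbacks.log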
00:06:25.482  
00:06:25.482  real	0m9.943s
00:06:25.482  user	0m8.035s
00:06:25.482  sys	0m1.533s
00:06:25.482   16:16:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.482   16:16:54 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.482  ************************************
00:06:25.482  END TEST env
00:06:25.482  ************************************
00:06:25.482   16:16:54  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:25.482   16:16:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.482   16:16:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.482   16:16:54  -- common/autotest_common.sh@10 -- # set +x
00:06:25.482  ************************************
00:06:25.482  START TEST rpc
00:06:25.482  ************************************
00:06:25.482   16:16:54 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:25.742  * Looking for test storage...
00:06:25.742  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:25.742     16:16:54 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:06:25.742     16:16:54 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:25.742    16:16:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:25.742    16:16:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:25.742    16:16:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:25.742    16:16:54 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:25.742    16:16:54 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:25.742    16:16:54 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:25.742    16:16:54 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:25.742    16:16:54 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:25.742    16:16:54 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:25.742    16:16:54 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:25.742    16:16:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:25.742    16:16:54 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:25.742    16:16:54 rpc -- scripts/common.sh@345 -- # : 1
00:06:25.742    16:16:54 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:25.742    16:16:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:25.742     16:16:54 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:25.742     16:16:54 rpc -- scripts/common.sh@353 -- # local d=1
00:06:25.742     16:16:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:25.742     16:16:54 rpc -- scripts/common.sh@355 -- # echo 1
00:06:25.742    16:16:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:25.742     16:16:54 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:25.742     16:16:54 rpc -- scripts/common.sh@353 -- # local d=2
00:06:25.742     16:16:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:25.742     16:16:54 rpc -- scripts/common.sh@355 -- # echo 2
00:06:25.742    16:16:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:25.742    16:16:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:25.742    16:16:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:25.742    16:16:54 rpc -- scripts/common.sh@368 -- # return 0
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:25.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.742  		--rc genhtml_branch_coverage=1
00:06:25.742  		--rc genhtml_function_coverage=1
00:06:25.742  		--rc genhtml_legend=1
00:06:25.742  		--rc geninfo_all_blocks=1
00:06:25.742  		--rc geninfo_unexecuted_blocks=1
00:06:25.742  		
00:06:25.742  		'
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:25.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.742  		--rc genhtml_branch_coverage=1
00:06:25.742  		--rc genhtml_function_coverage=1
00:06:25.742  		--rc genhtml_legend=1
00:06:25.742  		--rc geninfo_all_blocks=1
00:06:25.742  		--rc geninfo_unexecuted_blocks=1
00:06:25.742  		
00:06:25.742  		'
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:25.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.742  		--rc genhtml_branch_coverage=1
00:06:25.742  		--rc genhtml_function_coverage=1
00:06:25.742  		--rc genhtml_legend=1
00:06:25.742  		--rc geninfo_all_blocks=1
00:06:25.742  		--rc geninfo_unexecuted_blocks=1
00:06:25.742  		
00:06:25.742  		'
00:06:25.742    16:16:54 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:25.742  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.742  		--rc genhtml_branch_coverage=1
00:06:25.742  		--rc genhtml_function_coverage=1
00:06:25.742  		--rc genhtml_legend=1
00:06:25.742  		--rc geninfo_all_blocks=1
00:06:25.742  		--rc geninfo_unexecuted_blocks=1
00:06:25.742  		
00:06:25.742  		'
00:06:25.742   16:16:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58937
00:06:25.742   16:16:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:25.742   16:16:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:25.742   16:16:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58937
00:06:25.742   16:16:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 58937 ']'
00:06:25.742   16:16:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:25.742   16:16:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.742  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:25.742   16:16:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:25.742   16:16:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.742   16:16:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:26.003  [2024-12-09 16:16:54.950202] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:06:26.004  [2024-12-09 16:16:54.950323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58937 ]
00:06:26.004  [2024-12-09 16:16:55.134140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.265  [2024-12-09 16:16:55.247154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:26.265  [2024-12-09 16:16:55.247211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58937' to capture a snapshot of events at runtime.
00:06:26.265  [2024-12-09 16:16:55.247241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:26.265  [2024-12-09 16:16:55.247255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:26.265  [2024-12-09 16:16:55.247266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58937 for offline analysis/debug.
00:06:26.265  [2024-12-09 16:16:55.248580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.203   16:16:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:27.204   16:16:56 rpc -- common/autotest_common.sh@868 -- # return 0
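The rpc suite runs against a live target: rpc.sh starts spdk_tgt with -e bdev (enabling the bdev tracepoint group noted above), records its pid, installs a trap so the daemon dies on any exit path, and blocks in waitforlisten until /var/tmp/spdk.sock answers. The start/wait/trap pattern, simplified:

    # Simplified start-and-wait pattern; the real waitforlisten also polls
    # RPC readiness and retries up to max_retries. $SPDK_DIR is hypothetical.
    "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done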
00:06:27.204   16:16:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:27.204   16:16:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:27.204   16:16:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:27.204   16:16:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:27.204   16:16:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:27.204   16:16:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.204   16:16:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:27.204  ************************************
00:06:27.204  START TEST rpc_integrity
00:06:27.204  ************************************
00:06:27.204   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:27.204  {
00:06:27.204  "name": "Malloc0",
00:06:27.204  "aliases": [
00:06:27.204  "863982d6-5e3f-4d98-9dfe-40829fd81c96"
00:06:27.204  ],
00:06:27.204  "product_name": "Malloc disk",
00:06:27.204  "block_size": 512,
00:06:27.204  "num_blocks": 16384,
00:06:27.204  "uuid": "863982d6-5e3f-4d98-9dfe-40829fd81c96",
00:06:27.204  "assigned_rate_limits": {
00:06:27.204  "rw_ios_per_sec": 0,
00:06:27.204  "rw_mbytes_per_sec": 0,
00:06:27.204  "r_mbytes_per_sec": 0,
00:06:27.204  "w_mbytes_per_sec": 0
00:06:27.204  },
00:06:27.204  "claimed": false,
00:06:27.204  "zoned": false,
00:06:27.204  "supported_io_types": {
00:06:27.204  "read": true,
00:06:27.204  "write": true,
00:06:27.204  "unmap": true,
00:06:27.204  "flush": true,
00:06:27.204  "reset": true,
00:06:27.204  "nvme_admin": false,
00:06:27.204  "nvme_io": false,
00:06:27.204  "nvme_io_md": false,
00:06:27.204  "write_zeroes": true,
00:06:27.204  "zcopy": true,
00:06:27.204  "get_zone_info": false,
00:06:27.204  "zone_management": false,
00:06:27.204  "zone_append": false,
00:06:27.204  "compare": false,
00:06:27.204  "compare_and_write": false,
00:06:27.204  "abort": true,
00:06:27.204  "seek_hole": false,
00:06:27.204  "seek_data": false,
00:06:27.204  "copy": true,
00:06:27.204  "nvme_iov_md": false
00:06:27.204  },
00:06:27.204  "memory_domains": [
00:06:27.204  {
00:06:27.204  "dma_device_id": "system",
00:06:27.204  "dma_device_type": 1
00:06:27.204  },
00:06:27.204  {
00:06:27.204  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:27.204  "dma_device_type": 2
00:06:27.204  }
00:06:27.204  ],
00:06:27.204  "driver_specific": {}
00:06:27.204  }
00:06:27.204  ]'
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:27.204   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.204   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.204  [2024-12-09 16:16:56.289167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:27.204  [2024-12-09 16:16:56.289225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:27.204  [2024-12-09 16:16:56.289255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:06:27.204  [2024-12-09 16:16:56.289272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:27.204  [2024-12-09 16:16:56.291758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:27.204  [2024-12-09 16:16:56.291804] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:27.204  Passthru0
00:06:27.204   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.204    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:27.204  {
00:06:27.204  "name": "Malloc0",
00:06:27.204  "aliases": [
00:06:27.204  "863982d6-5e3f-4d98-9dfe-40829fd81c96"
00:06:27.204  ],
00:06:27.204  "product_name": "Malloc disk",
00:06:27.204  "block_size": 512,
00:06:27.204  "num_blocks": 16384,
00:06:27.204  "uuid": "863982d6-5e3f-4d98-9dfe-40829fd81c96",
00:06:27.204  "assigned_rate_limits": {
00:06:27.204  "rw_ios_per_sec": 0,
00:06:27.204  "rw_mbytes_per_sec": 0,
00:06:27.204  "r_mbytes_per_sec": 0,
00:06:27.204  "w_mbytes_per_sec": 0
00:06:27.204  },
00:06:27.204  "claimed": true,
00:06:27.204  "claim_type": "exclusive_write",
00:06:27.204  "zoned": false,
00:06:27.204  "supported_io_types": {
00:06:27.204  "read": true,
00:06:27.204  "write": true,
00:06:27.204  "unmap": true,
00:06:27.204  "flush": true,
00:06:27.204  "reset": true,
00:06:27.204  "nvme_admin": false,
00:06:27.204  "nvme_io": false,
00:06:27.204  "nvme_io_md": false,
00:06:27.204  "write_zeroes": true,
00:06:27.204  "zcopy": true,
00:06:27.204  "get_zone_info": false,
00:06:27.204  "zone_management": false,
00:06:27.204  "zone_append": false,
00:06:27.204  "compare": false,
00:06:27.204  "compare_and_write": false,
00:06:27.204  "abort": true,
00:06:27.204  "seek_hole": false,
00:06:27.204  "seek_data": false,
00:06:27.204  "copy": true,
00:06:27.204  "nvme_iov_md": false
00:06:27.204  },
00:06:27.204  "memory_domains": [
00:06:27.204  {
00:06:27.204  "dma_device_id": "system",
00:06:27.204  "dma_device_type": 1
00:06:27.204  },
00:06:27.204  {
00:06:27.204  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:27.204  "dma_device_type": 2
00:06:27.204  }
00:06:27.204  ],
00:06:27.204  "driver_specific": {}
00:06:27.204  },
00:06:27.204  {
00:06:27.204  "name": "Passthru0",
00:06:27.204  "aliases": [
00:06:27.204  "d43be238-149c-587f-84d4-72e9b1ad710d"
00:06:27.204  ],
00:06:27.204  "product_name": "passthru",
00:06:27.204  "block_size": 512,
00:06:27.204  "num_blocks": 16384,
00:06:27.204  "uuid": "d43be238-149c-587f-84d4-72e9b1ad710d",
00:06:27.204  "assigned_rate_limits": {
00:06:27.204  "rw_ios_per_sec": 0,
00:06:27.204  "rw_mbytes_per_sec": 0,
00:06:27.204  "r_mbytes_per_sec": 0,
00:06:27.204  "w_mbytes_per_sec": 0
00:06:27.204  },
00:06:27.204  "claimed": false,
00:06:27.204  "zoned": false,
00:06:27.204  "supported_io_types": {
00:06:27.204  "read": true,
00:06:27.204  "write": true,
00:06:27.204  "unmap": true,
00:06:27.204  "flush": true,
00:06:27.204  "reset": true,
00:06:27.204  "nvme_admin": false,
00:06:27.204  "nvme_io": false,
00:06:27.204  "nvme_io_md": false,
00:06:27.204  "write_zeroes": true,
00:06:27.204  "zcopy": true,
00:06:27.204  "get_zone_info": false,
00:06:27.204  "zone_management": false,
00:06:27.204  "zone_append": false,
00:06:27.204  "compare": false,
00:06:27.204  "compare_and_write": false,
00:06:27.204  "abort": true,
00:06:27.204  "seek_hole": false,
00:06:27.204  "seek_data": false,
00:06:27.204  "copy": true,
00:06:27.204  "nvme_iov_md": false
00:06:27.204  },
00:06:27.204  "memory_domains": [
00:06:27.204  {
00:06:27.204  "dma_device_id": "system",
00:06:27.204  "dma_device_type": 1
00:06:27.204  },
00:06:27.204  {
00:06:27.204  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:27.204  "dma_device_type": 2
00:06:27.204  }
00:06:27.204  ],
00:06:27.204  "driver_specific": {
00:06:27.204  "passthru": {
00:06:27.204  "name": "Passthru0",
00:06:27.204  "base_bdev_name": "Malloc0"
00:06:27.204  }
00:06:27.204  }
00:06:27.204  }
00:06:27.204  ]'
00:06:27.204    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:27.204   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:27.204   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.464   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.464    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:27.464    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.464    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.464    16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.464   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:27.464    16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:27.464   16:16:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:27.464  
00:06:27.464  real	0m0.349s
00:06:27.464  user	0m0.186s
00:06:27.464  sys	0m0.067s
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.464   16:16:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.464  ************************************
00:06:27.464  END TEST rpc_integrity
00:06:27.464  ************************************
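rpc_integrity verifies bdev bookkeeping end to end: start from an empty bdev list, create an 8MB malloc bdev (16384 blocks of 512 bytes), stack a passthru bdev on top (note Malloc0 flips to "claimed": true with claim_type "exclusive_write" once Passthru0 owns it), then delete both and confirm the list is empty again, checking jq length at each step. The same sequence can be replayed by hand against the running target:

    # Replay of the rpc_integrity sequence over /var/tmp/spdk.sock;
    # rpc_cmd in the trace is a wrapper around scripts/rpc.py.
    rpc=$SPDK_DIR/scripts/rpc.py
    $rpc bdev_get_bdevs | jq length              # expect 0
    $rpc bdev_malloc_create 8 512                # 8MB, 512B blocks -> Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc bdev_get_bdevs | jq length              # expect 2
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length              # expect 0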
00:06:27.464   16:16:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:27.464   16:16:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:27.464   16:16:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.464   16:16:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:27.464  ************************************
00:06:27.464  START TEST rpc_plugins
00:06:27.464  ************************************
00:06:27.464   16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:06:27.464    16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:27.464    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.464    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:27.464    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.464   16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:27.464    16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:27.464    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.464    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:27.464    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.464   16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:27.464  {
00:06:27.464  "name": "Malloc1",
00:06:27.464  "aliases": [
00:06:27.464  "c9896014-0d43-4bf6-a2e1-8ecadf7ca357"
00:06:27.464  ],
00:06:27.464  "product_name": "Malloc disk",
00:06:27.464  "block_size": 4096,
00:06:27.464  "num_blocks": 256,
00:06:27.464  "uuid": "c9896014-0d43-4bf6-a2e1-8ecadf7ca357",
00:06:27.464  "assigned_rate_limits": {
00:06:27.464  "rw_ios_per_sec": 0,
00:06:27.464  "rw_mbytes_per_sec": 0,
00:06:27.464  "r_mbytes_per_sec": 0,
00:06:27.464  "w_mbytes_per_sec": 0
00:06:27.464  },
00:06:27.464  "claimed": false,
00:06:27.464  "zoned": false,
00:06:27.464  "supported_io_types": {
00:06:27.464  "read": true,
00:06:27.464  "write": true,
00:06:27.464  "unmap": true,
00:06:27.464  "flush": true,
00:06:27.464  "reset": true,
00:06:27.464  "nvme_admin": false,
00:06:27.464  "nvme_io": false,
00:06:27.464  "nvme_io_md": false,
00:06:27.464  "write_zeroes": true,
00:06:27.464  "zcopy": true,
00:06:27.464  "get_zone_info": false,
00:06:27.464  "zone_management": false,
00:06:27.464  "zone_append": false,
00:06:27.464  "compare": false,
00:06:27.464  "compare_and_write": false,
00:06:27.464  "abort": true,
00:06:27.464  "seek_hole": false,
00:06:27.464  "seek_data": false,
00:06:27.464  "copy": true,
00:06:27.464  "nvme_iov_md": false
00:06:27.464  },
00:06:27.464  "memory_domains": [
00:06:27.464  {
00:06:27.464  "dma_device_id": "system",
00:06:27.464  "dma_device_type": 1
00:06:27.464  },
00:06:27.464  {
00:06:27.464  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:27.464  "dma_device_type": 2
00:06:27.464  }
00:06:27.464  ],
00:06:27.464  "driver_specific": {}
00:06:27.464  }
00:06:27.464  ]'
00:06:27.464    16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:27.464   16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:27.464   16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:27.464   16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.724   16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:27.724   16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.724    16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:27.724    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.724    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:27.724    16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.724   16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:27.724    16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:27.724   16:16:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:27.724  
00:06:27.724  real	0m0.166s
00:06:27.724  user	0m0.095s
00:06:27.724  sys	0m0.029s
00:06:27.724   16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.724   16:16:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:27.724  ************************************
00:06:27.724  END TEST rpc_plugins
00:06:27.724  ************************************
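[editor's note] rpc_plugins exercised the --plugin loading path of rpc.py. A hedged sketch of the same round-trip, assuming the test's plugin module (rpc_plugin.py) is importable from PYTHONPATH:

    malloc=$(scripts/rpc.py --plugin rpc_plugin create_malloc)   # returned Malloc1 above
    scripts/rpc.py bdev_get_bdevs | jq length                    # expect 1
    scripts/rpc.py --plugin rpc_plugin delete_malloc "$malloc"
    scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0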
00:06:27.724   16:16:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:27.724   16:16:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:27.724   16:16:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.724   16:16:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:27.724  ************************************
00:06:27.724  START TEST rpc_trace_cmd_test
00:06:27.724  ************************************
00:06:27.724   16:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:06:27.724   16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:27.724    16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:27.724    16:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.724    16:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.724    16:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.724   16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:27.724  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58937",
00:06:27.724  "tpoint_group_mask": "0x8",
00:06:27.724  "iscsi_conn": {
00:06:27.724  "mask": "0x2",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "scsi": {
00:06:27.724  "mask": "0x4",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "bdev": {
00:06:27.724  "mask": "0x8",
00:06:27.724  "tpoint_mask": "0xffffffffffffffff"
00:06:27.724  },
00:06:27.724  "nvmf_rdma": {
00:06:27.724  "mask": "0x10",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "nvmf_tcp": {
00:06:27.724  "mask": "0x20",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "ftl": {
00:06:27.724  "mask": "0x40",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "blobfs": {
00:06:27.724  "mask": "0x80",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "dsa": {
00:06:27.724  "mask": "0x200",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "thread": {
00:06:27.724  "mask": "0x400",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "nvme_pcie": {
00:06:27.724  "mask": "0x800",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "iaa": {
00:06:27.724  "mask": "0x1000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "nvme_tcp": {
00:06:27.724  "mask": "0x2000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "bdev_nvme": {
00:06:27.724  "mask": "0x4000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "sock": {
00:06:27.724  "mask": "0x8000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "blob": {
00:06:27.724  "mask": "0x10000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "bdev_raid": {
00:06:27.724  "mask": "0x20000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  },
00:06:27.724  "scheduler": {
00:06:27.724  "mask": "0x40000",
00:06:27.724  "tpoint_mask": "0x0"
00:06:27.724  }
00:06:27.724  }'
00:06:27.724    16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:27.724   16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:27.724    16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:27.984   16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:27.984    16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:27.984   16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:27.984    16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:27.984   16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:27.984    16:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:27.984   16:16:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:27.984  
00:06:27.984  real	0m0.236s
00:06:27.984  user	0m0.185s
00:06:27.984  sys	0m0.043s
00:06:27.984   16:16:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.984   16:16:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:27.984  ************************************
00:06:27.984  END TEST rpc_trace_cmd_test
00:06:27.984  ************************************
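[editor's note] The four assertions rpc_trace_cmd_test made against the trace_get_info payload (rpc.sh@43-47 above) map onto these jq probes; a sketch assuming the same running target:

    info=$(scripts/rpc.py trace_get_info)
    echo "$info" | jq length                       # 19 keys here, must be > 2
    echo "$info" | jq 'has("tpoint_group_mask")'   # expect true
    echo "$info" | jq 'has("tpoint_shm_path")'     # expect true
    echo "$info" | jq -r .bdev.tpoint_mask         # expect non-zero (0xffffffffffffffff above)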
00:06:27.984   16:16:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:27.984   16:16:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:27.984   16:16:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:27.984   16:16:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:27.984   16:16:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.984   16:16:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:27.984  ************************************
00:06:27.984  START TEST rpc_daemon_integrity
00:06:27.984  ************************************
00:06:27.984   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.984   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:27.984   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.984    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:28.244  {
00:06:28.244  "name": "Malloc2",
00:06:28.244  "aliases": [
00:06:28.244  "4eea72f1-8bb1-4351-be33-24b5f6a0907a"
00:06:28.244  ],
00:06:28.244  "product_name": "Malloc disk",
00:06:28.244  "block_size": 512,
00:06:28.244  "num_blocks": 16384,
00:06:28.244  "uuid": "4eea72f1-8bb1-4351-be33-24b5f6a0907a",
00:06:28.244  "assigned_rate_limits": {
00:06:28.244  "rw_ios_per_sec": 0,
00:06:28.244  "rw_mbytes_per_sec": 0,
00:06:28.244  "r_mbytes_per_sec": 0,
00:06:28.244  "w_mbytes_per_sec": 0
00:06:28.244  },
00:06:28.244  "claimed": false,
00:06:28.244  "zoned": false,
00:06:28.244  "supported_io_types": {
00:06:28.244  "read": true,
00:06:28.244  "write": true,
00:06:28.244  "unmap": true,
00:06:28.244  "flush": true,
00:06:28.244  "reset": true,
00:06:28.244  "nvme_admin": false,
00:06:28.244  "nvme_io": false,
00:06:28.244  "nvme_io_md": false,
00:06:28.244  "write_zeroes": true,
00:06:28.244  "zcopy": true,
00:06:28.244  "get_zone_info": false,
00:06:28.244  "zone_management": false,
00:06:28.244  "zone_append": false,
00:06:28.244  "compare": false,
00:06:28.244  "compare_and_write": false,
00:06:28.244  "abort": true,
00:06:28.244  "seek_hole": false,
00:06:28.244  "seek_data": false,
00:06:28.244  "copy": true,
00:06:28.244  "nvme_iov_md": false
00:06:28.244  },
00:06:28.244  "memory_domains": [
00:06:28.244  {
00:06:28.244  "dma_device_id": "system",
00:06:28.244  "dma_device_type": 1
00:06:28.244  },
00:06:28.244  {
00:06:28.244  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:28.244  "dma_device_type": 2
00:06:28.244  }
00:06:28.244  ],
00:06:28.244  "driver_specific": {}
00:06:28.244  }
00:06:28.244  ]'
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.244  [2024-12-09 16:16:57.252768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:06:28.244  [2024-12-09 16:16:57.252822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:28.244  [2024-12-09 16:16:57.252843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009680
00:06:28.244  [2024-12-09 16:16:57.252857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:28.244  [2024-12-09 16:16:57.255315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:28.244  [2024-12-09 16:16:57.255357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:28.244  Passthru0
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.244    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.244   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:28.244  {
00:06:28.244  "name": "Malloc2",
00:06:28.244  "aliases": [
00:06:28.244  "4eea72f1-8bb1-4351-be33-24b5f6a0907a"
00:06:28.244  ],
00:06:28.244  "product_name": "Malloc disk",
00:06:28.244  "block_size": 512,
00:06:28.244  "num_blocks": 16384,
00:06:28.244  "uuid": "4eea72f1-8bb1-4351-be33-24b5f6a0907a",
00:06:28.244  "assigned_rate_limits": {
00:06:28.244  "rw_ios_per_sec": 0,
00:06:28.244  "rw_mbytes_per_sec": 0,
00:06:28.244  "r_mbytes_per_sec": 0,
00:06:28.244  "w_mbytes_per_sec": 0
00:06:28.244  },
00:06:28.244  "claimed": true,
00:06:28.244  "claim_type": "exclusive_write",
00:06:28.244  "zoned": false,
00:06:28.244  "supported_io_types": {
00:06:28.244  "read": true,
00:06:28.244  "write": true,
00:06:28.244  "unmap": true,
00:06:28.244  "flush": true,
00:06:28.244  "reset": true,
00:06:28.244  "nvme_admin": false,
00:06:28.244  "nvme_io": false,
00:06:28.244  "nvme_io_md": false,
00:06:28.244  "write_zeroes": true,
00:06:28.244  "zcopy": true,
00:06:28.244  "get_zone_info": false,
00:06:28.244  "zone_management": false,
00:06:28.244  "zone_append": false,
00:06:28.244  "compare": false,
00:06:28.244  "compare_and_write": false,
00:06:28.244  "abort": true,
00:06:28.244  "seek_hole": false,
00:06:28.244  "seek_data": false,
00:06:28.244  "copy": true,
00:06:28.244  "nvme_iov_md": false
00:06:28.244  },
00:06:28.244  "memory_domains": [
00:06:28.244  {
00:06:28.244  "dma_device_id": "system",
00:06:28.244  "dma_device_type": 1
00:06:28.244  },
00:06:28.244  {
00:06:28.244  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:28.244  "dma_device_type": 2
00:06:28.244  }
00:06:28.244  ],
00:06:28.244  "driver_specific": {}
00:06:28.244  },
00:06:28.244  {
00:06:28.244  "name": "Passthru0",
00:06:28.244  "aliases": [
00:06:28.244  "d79d4191-0ab0-5808-965f-81aab8e62e3b"
00:06:28.244  ],
00:06:28.244  "product_name": "passthru",
00:06:28.244  "block_size": 512,
00:06:28.244  "num_blocks": 16384,
00:06:28.244  "uuid": "d79d4191-0ab0-5808-965f-81aab8e62e3b",
00:06:28.244  "assigned_rate_limits": {
00:06:28.244  "rw_ios_per_sec": 0,
00:06:28.244  "rw_mbytes_per_sec": 0,
00:06:28.244  "r_mbytes_per_sec": 0,
00:06:28.244  "w_mbytes_per_sec": 0
00:06:28.244  },
00:06:28.244  "claimed": false,
00:06:28.244  "zoned": false,
00:06:28.244  "supported_io_types": {
00:06:28.244  "read": true,
00:06:28.244  "write": true,
00:06:28.244  "unmap": true,
00:06:28.244  "flush": true,
00:06:28.244  "reset": true,
00:06:28.244  "nvme_admin": false,
00:06:28.244  "nvme_io": false,
00:06:28.244  "nvme_io_md": false,
00:06:28.244  "write_zeroes": true,
00:06:28.244  "zcopy": true,
00:06:28.244  "get_zone_info": false,
00:06:28.244  "zone_management": false,
00:06:28.244  "zone_append": false,
00:06:28.244  "compare": false,
00:06:28.244  "compare_and_write": false,
00:06:28.244  "abort": true,
00:06:28.244  "seek_hole": false,
00:06:28.244  "seek_data": false,
00:06:28.244  "copy": true,
00:06:28.244  "nvme_iov_md": false
00:06:28.244  },
00:06:28.244  "memory_domains": [
00:06:28.244  {
00:06:28.244  "dma_device_id": "system",
00:06:28.244  "dma_device_type": 1
00:06:28.244  },
00:06:28.244  {
00:06:28.244  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:28.244  "dma_device_type": 2
00:06:28.244  }
00:06:28.244  ],
00:06:28.244  "driver_specific": {
00:06:28.244  "passthru": {
00:06:28.244  "name": "Passthru0",
00:06:28.244  "base_bdev_name": "Malloc2"
00:06:28.244  }
00:06:28.244  }
00:06:28.244  }
00:06:28.244  ]'
00:06:28.245    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.245    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:28.245    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.245    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.245    16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.245   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:28.245    16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:28.505   16:16:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:28.505  
00:06:28.505  real	0m0.335s
00:06:28.505  user	0m0.178s
00:06:28.505  sys	0m0.064s
00:06:28.505   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:28.505   16:16:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:28.505  ************************************
00:06:28.505  END TEST rpc_daemon_integrity
00:06:28.505  ************************************
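[editor's note] rpc_daemon_integrity walked the full passthru-over-malloc lifecycle visible above. A condensed replay, assuming the default RPC socket:

    malloc=$(scripts/rpc.py bdev_malloc_create 8 512)              # 8 MiB, 512 B blocks (named Malloc2 in this run)
    scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0  # claims the base bdev exclusively
    scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2 (base + passthru)
    scripts/rpc.py bdev_passthru_delete Passthru0                  # releases the claim
    scripts/rpc.py bdev_malloc_delete "$malloc"
    scripts/rpc.py bdev_get_bdevs | jq length                      # expect 0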
00:06:28.505   16:16:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:06:28.505   16:16:57 rpc -- rpc/rpc.sh@84 -- # killprocess 58937
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 58937 ']'
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@958 -- # kill -0 58937
00:06:28.505    16:16:57 rpc -- common/autotest_common.sh@959 -- # uname
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:28.505    16:16:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58937
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58937'
00:06:28.505  killing process with pid 58937
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@973 -- # kill 58937
00:06:28.505   16:16:57 rpc -- common/autotest_common.sh@978 -- # wait 58937
00:06:31.043  
00:06:31.043  real	0m5.292s
00:06:31.043  user	0m5.764s
00:06:31.043  sys	0m0.987s
00:06:31.043   16:16:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:31.043   16:16:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:31.043  ************************************
00:06:31.043  END TEST rpc
00:06:31.043  ************************************
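[editor's note] Every START TEST/END TEST banner and real/user/sys triple in this log comes from the run_test helper in autotest_common.sh. A condensed sketch of the pattern, not the exact helper (the real one also maintains the per-test prefix seen on each xtrace line):

    run_test() {                  # run_test <name> <command...>
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }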
00:06:31.043   16:16:59  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:31.043   16:16:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:31.043   16:16:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:31.043   16:16:59  -- common/autotest_common.sh@10 -- # set +x
00:06:31.043  ************************************
00:06:31.043  START TEST skip_rpc
00:06:31.043  ************************************
00:06:31.043   16:16:59 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:31.043  * Looking for test storage...
00:06:31.043  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:31.043     16:17:00 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:06:31.043     16:17:00 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@345 -- # : 1
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:31.043     16:17:00 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:31.043    16:17:00 skip_rpc -- scripts/common.sh@368 -- # return 0
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:31.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.043  		--rc genhtml_branch_coverage=1
00:06:31.043  		--rc genhtml_function_coverage=1
00:06:31.043  		--rc genhtml_legend=1
00:06:31.043  		--rc geninfo_all_blocks=1
00:06:31.043  		--rc geninfo_unexecuted_blocks=1
00:06:31.043  		
00:06:31.043  		'
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:31.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.043  		--rc genhtml_branch_coverage=1
00:06:31.043  		--rc genhtml_function_coverage=1
00:06:31.043  		--rc genhtml_legend=1
00:06:31.043  		--rc geninfo_all_blocks=1
00:06:31.043  		--rc geninfo_unexecuted_blocks=1
00:06:31.043  		
00:06:31.043  		'
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:31.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.043  		--rc genhtml_branch_coverage=1
00:06:31.043  		--rc genhtml_function_coverage=1
00:06:31.043  		--rc genhtml_legend=1
00:06:31.043  		--rc geninfo_all_blocks=1
00:06:31.043  		--rc geninfo_unexecuted_blocks=1
00:06:31.043  		
00:06:31.043  		'
00:06:31.043    16:17:00 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:31.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:31.043  		--rc genhtml_branch_coverage=1
00:06:31.043  		--rc genhtml_function_coverage=1
00:06:31.043  		--rc genhtml_legend=1
00:06:31.043  		--rc geninfo_all_blocks=1
00:06:31.043  		--rc geninfo_unexecuted_blocks=1
00:06:31.043  		
00:06:31.043  		'
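[editor's note] The lt/cmp_versions expansion above splits each version string on ".-:" and compares it component-wise to decide which lcov flags to use. A condensed sketch of the same idea:

    lt() {   # returns 0 when $1 < $2, comparing dot-separated components numerically
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # the path taken above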
00:06:31.043   16:17:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:31.043   16:17:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:31.043   16:17:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:06:31.043   16:17:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:31.043   16:17:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:31.043   16:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:31.043  ************************************
00:06:31.043  START TEST skip_rpc
00:06:31.043  ************************************
00:06:31.043   16:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:06:31.043   16:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59169
00:06:31.043   16:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:06:31.043   16:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:31.043   16:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:06:31.303  [2024-12-09 16:17:00.319079] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:06:31.303  [2024-12-09 16:17:00.319199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ]
00:06:31.562  [2024-12-09 16:17:00.499310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.562  [2024-12-09 16:17:00.616882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.843   16:17:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:36.844    16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59169
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59169 ']'
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59169
00:06:36.844    16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:36.844    16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59169
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:36.844  killing process with pid 59169
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59169'
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59169
00:06:36.844   16:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59169
00:06:38.753  
00:06:38.753  real	0m7.439s
00:06:38.753  user	0m6.955s
00:06:38.753  sys	0m0.404s
00:06:38.753   16:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:38.753   16:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:38.753  ************************************
00:06:38.753  END TEST skip_rpc
00:06:38.753  ************************************
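[editor's note] The skip_rpc case above starts the target with --no-rpc-server, so there is no socket to talk to and the RPC must fail (the NOT wrapper asserts a non-zero exit). A hedged replay, with an illustrative short timeout:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                     # same settle delay as rpc/skip_rpc.sh@19
    if scripts/rpc.py -t 2 spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
    fi
    kill "$spdk_pid"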
00:06:38.753   16:17:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:38.753   16:17:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:38.753   16:17:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:38.753   16:17:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:38.753  ************************************
00:06:38.753  START TEST skip_rpc_with_json
00:06:38.753  ************************************
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59277
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59277
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59277 ']'
00:06:38.753  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:38.753   16:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:38.753  [2024-12-09 16:17:07.835296] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:06:38.753  [2024-12-09 16:17:07.835414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59277 ]
00:06:39.012  [2024-12-09 16:17:08.017984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.012  [2024-12-09 16:17:08.132642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.961   16:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.961   16:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:39.961  [2024-12-09 16:17:09.007129] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:39.961  request:
00:06:39.961  {
00:06:39.961  "trtype": "tcp",
00:06:39.961  "method": "nvmf_get_transports",
00:06:39.961  "req_id": 1
00:06:39.961  }
00:06:39.961  Got JSON-RPC error response
00:06:39.961  response:
00:06:39.961  {
00:06:39.961  "code": -19,
00:06:39.961  "message": "No such device"
00:06:39.961  }
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:39.961  [2024-12-09 16:17:09.019222] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.961   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:40.220   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:40.220   16:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:40.220  {
00:06:40.220  "subsystems": [
00:06:40.220  {
00:06:40.220  "subsystem": "fsdev",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "fsdev_set_opts",
00:06:40.220  "params": {
00:06:40.220  "fsdev_io_pool_size": 65535,
00:06:40.220  "fsdev_io_cache_size": 256
00:06:40.220  }
00:06:40.220  }
00:06:40.220  ]
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "keyring",
00:06:40.220  "config": []
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "iobuf",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "iobuf_set_options",
00:06:40.220  "params": {
00:06:40.220  "small_pool_count": 8192,
00:06:40.220  "large_pool_count": 1024,
00:06:40.220  "small_bufsize": 8192,
00:06:40.220  "large_bufsize": 135168,
00:06:40.220  "enable_numa": false
00:06:40.220  }
00:06:40.220  }
00:06:40.220  ]
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "sock",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "sock_set_default_impl",
00:06:40.220  "params": {
00:06:40.220  "impl_name": "posix"
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "sock_impl_set_options",
00:06:40.220  "params": {
00:06:40.220  "impl_name": "ssl",
00:06:40.220  "recv_buf_size": 4096,
00:06:40.220  "send_buf_size": 4096,
00:06:40.220  "enable_recv_pipe": true,
00:06:40.220  "enable_quickack": false,
00:06:40.220  "enable_placement_id": 0,
00:06:40.220  "enable_zerocopy_send_server": true,
00:06:40.220  "enable_zerocopy_send_client": false,
00:06:40.220  "zerocopy_threshold": 0,
00:06:40.220  "tls_version": 0,
00:06:40.220  "enable_ktls": false
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "sock_impl_set_options",
00:06:40.220  "params": {
00:06:40.220  "impl_name": "posix",
00:06:40.220  "recv_buf_size": 2097152,
00:06:40.220  "send_buf_size": 2097152,
00:06:40.220  "enable_recv_pipe": true,
00:06:40.220  "enable_quickack": false,
00:06:40.220  "enable_placement_id": 0,
00:06:40.220  "enable_zerocopy_send_server": true,
00:06:40.220  "enable_zerocopy_send_client": false,
00:06:40.220  "zerocopy_threshold": 0,
00:06:40.220  "tls_version": 0,
00:06:40.220  "enable_ktls": false
00:06:40.220  }
00:06:40.220  }
00:06:40.220  ]
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "vmd",
00:06:40.220  "config": []
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "accel",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "accel_set_options",
00:06:40.220  "params": {
00:06:40.220  "small_cache_size": 128,
00:06:40.220  "large_cache_size": 16,
00:06:40.220  "task_count": 2048,
00:06:40.220  "sequence_count": 2048,
00:06:40.220  "buf_count": 2048
00:06:40.220  }
00:06:40.220  }
00:06:40.220  ]
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "bdev",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "bdev_set_options",
00:06:40.220  "params": {
00:06:40.220  "bdev_io_pool_size": 65535,
00:06:40.220  "bdev_io_cache_size": 256,
00:06:40.220  "bdev_auto_examine": true,
00:06:40.220  "iobuf_small_cache_size": 128,
00:06:40.220  "iobuf_large_cache_size": 16
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "bdev_raid_set_options",
00:06:40.220  "params": {
00:06:40.220  "process_window_size_kb": 1024,
00:06:40.220  "process_max_bandwidth_mb_sec": 0
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "bdev_iscsi_set_options",
00:06:40.220  "params": {
00:06:40.220  "timeout_sec": 30
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "bdev_nvme_set_options",
00:06:40.220  "params": {
00:06:40.220  "action_on_timeout": "none",
00:06:40.220  "timeout_us": 0,
00:06:40.220  "timeout_admin_us": 0,
00:06:40.220  "keep_alive_timeout_ms": 10000,
00:06:40.220  "arbitration_burst": 0,
00:06:40.220  "low_priority_weight": 0,
00:06:40.220  "medium_priority_weight": 0,
00:06:40.220  "high_priority_weight": 0,
00:06:40.220  "nvme_adminq_poll_period_us": 10000,
00:06:40.220  "nvme_ioq_poll_period_us": 0,
00:06:40.220  "io_queue_requests": 0,
00:06:40.220  "delay_cmd_submit": true,
00:06:40.220  "transport_retry_count": 4,
00:06:40.220  "bdev_retry_count": 3,
00:06:40.220  "transport_ack_timeout": 0,
00:06:40.220  "ctrlr_loss_timeout_sec": 0,
00:06:40.220  "reconnect_delay_sec": 0,
00:06:40.220  "fast_io_fail_timeout_sec": 0,
00:06:40.220  "disable_auto_failback": false,
00:06:40.220  "generate_uuids": false,
00:06:40.220  "transport_tos": 0,
00:06:40.220  "nvme_error_stat": false,
00:06:40.220  "rdma_srq_size": 0,
00:06:40.220  "io_path_stat": false,
00:06:40.220  "allow_accel_sequence": false,
00:06:40.220  "rdma_max_cq_size": 0,
00:06:40.220  "rdma_cm_event_timeout_ms": 0,
00:06:40.220  "dhchap_digests": [
00:06:40.220  "sha256",
00:06:40.220  "sha384",
00:06:40.220  "sha512"
00:06:40.220  ],
00:06:40.220  "dhchap_dhgroups": [
00:06:40.220  "null",
00:06:40.220  "ffdhe2048",
00:06:40.220  "ffdhe3072",
00:06:40.220  "ffdhe4096",
00:06:40.220  "ffdhe6144",
00:06:40.220  "ffdhe8192"
00:06:40.220  ]
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "bdev_nvme_set_hotplug",
00:06:40.220  "params": {
00:06:40.220  "period_us": 100000,
00:06:40.220  "enable": false
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "bdev_wait_for_examine"
00:06:40.220  }
00:06:40.220  ]
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "scsi",
00:06:40.220  "config": null
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "scheduler",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "framework_set_scheduler",
00:06:40.220  "params": {
00:06:40.220  "name": "static"
00:06:40.220  }
00:06:40.220  }
00:06:40.220  ]
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "vhost_scsi",
00:06:40.220  "config": []
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "vhost_blk",
00:06:40.220  "config": []
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "ublk",
00:06:40.220  "config": []
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "nbd",
00:06:40.220  "config": []
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "subsystem": "nvmf",
00:06:40.220  "config": [
00:06:40.220  {
00:06:40.220  "method": "nvmf_set_config",
00:06:40.220  "params": {
00:06:40.220  "discovery_filter": "match_any",
00:06:40.220  "admin_cmd_passthru": {
00:06:40.220  "identify_ctrlr": false
00:06:40.220  },
00:06:40.220  "dhchap_digests": [
00:06:40.220  "sha256",
00:06:40.220  "sha384",
00:06:40.220  "sha512"
00:06:40.220  ],
00:06:40.220  "dhchap_dhgroups": [
00:06:40.220  "null",
00:06:40.220  "ffdhe2048",
00:06:40.220  "ffdhe3072",
00:06:40.220  "ffdhe4096",
00:06:40.220  "ffdhe6144",
00:06:40.220  "ffdhe8192"
00:06:40.220  ]
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "nvmf_set_max_subsystems",
00:06:40.220  "params": {
00:06:40.220  "max_subsystems": 1024
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "nvmf_set_crdt",
00:06:40.220  "params": {
00:06:40.220  "crdt1": 0,
00:06:40.220  "crdt2": 0,
00:06:40.220  "crdt3": 0
00:06:40.220  }
00:06:40.220  },
00:06:40.220  {
00:06:40.220  "method": "nvmf_create_transport",
00:06:40.220  "params": {
00:06:40.220  "trtype": "TCP",
00:06:40.220  "max_queue_depth": 128,
00:06:40.220  "max_io_qpairs_per_ctrlr": 127,
00:06:40.220  "in_capsule_data_size": 4096,
00:06:40.220  "max_io_size": 131072,
00:06:40.220  "io_unit_size": 131072,
00:06:40.220  "max_aq_depth": 128,
00:06:40.220  "num_shared_buffers": 511,
00:06:40.220  "buf_cache_size": 4294967295,
00:06:40.221  "dif_insert_or_strip": false,
00:06:40.221  "zcopy": false,
00:06:40.221  "c2h_success": true,
00:06:40.221  "sock_priority": 0,
00:06:40.221  "abort_timeout_sec": 1,
00:06:40.221  "ack_timeout": 0,
00:06:40.221  "data_wr_pool_size": 0
00:06:40.221  }
00:06:40.221  }
00:06:40.221  ]
00:06:40.221  },
00:06:40.221  {
00:06:40.221  "subsystem": "iscsi",
00:06:40.221  "config": [
00:06:40.221  {
00:06:40.221  "method": "iscsi_set_options",
00:06:40.221  "params": {
00:06:40.221  "node_base": "iqn.2016-06.io.spdk",
00:06:40.221  "max_sessions": 128,
00:06:40.221  "max_connections_per_session": 2,
00:06:40.221  "max_queue_depth": 64,
00:06:40.221  "default_time2wait": 2,
00:06:40.221  "default_time2retain": 20,
00:06:40.221  "first_burst_length": 8192,
00:06:40.221  "immediate_data": true,
00:06:40.221  "allow_duplicated_isid": false,
00:06:40.221  "error_recovery_level": 0,
00:06:40.221  "nop_timeout": 60,
00:06:40.221  "nop_in_interval": 30,
00:06:40.221  "disable_chap": false,
00:06:40.221  "require_chap": false,
00:06:40.221  "mutual_chap": false,
00:06:40.221  "chap_group": 0,
00:06:40.221  "max_large_datain_per_connection": 64,
00:06:40.221  "max_r2t_per_connection": 4,
00:06:40.221  "pdu_pool_size": 36864,
00:06:40.221  "immediate_data_pool_size": 16384,
00:06:40.221  "data_out_pool_size": 2048
00:06:40.221  }
00:06:40.221  }
00:06:40.221  ]
00:06:40.221  }
00:06:40.221  ]
00:06:40.221  }
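[editor's note] The JSON above is the save_config snapshot written to test/rpc/config.json and replayed later with --json. A couple of hedged jq probes against that file:

    jq '.subsystems | length' test/rpc/config.json   # number of subsystems captured
    jq '.subsystems[] | select(.subsystem == "nvmf") | .config[]
        | select(.method == "nvmf_create_transport") | .params.trtype' \
        test/rpc/config.json                          # expect "TCP"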
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59277
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59277 ']'
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59277
00:06:40.221    16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:40.221    16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59277
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:40.221  killing process with pid 59277
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59277'
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59277
00:06:40.221   16:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59277
00:06:42.760   16:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59329
00:06:42.760   16:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:42.760   16:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:48.037   16:17:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59329
00:06:48.037   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59329 ']'
00:06:48.037   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59329
00:06:48.037    16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:48.037   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:48.037    16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59329
00:06:48.037   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:48.037  killing process with pid 59329
00:06:48.038   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:48.038   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59329'
00:06:48.038   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59329
00:06:48.038   16:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59329
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:49.945  
00:06:49.945  real	0m11.325s
00:06:49.945  user	0m10.735s
00:06:49.945  sys	0m0.907s
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:49.945  ************************************
00:06:49.945  END TEST skip_rpc_with_json
00:06:49.945  ************************************
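[editor's note] End to end, skip_rpc_with_json seeded state over RPC, snapshotted it, then restarted the target from the snapshot and grepped its log for proof the state came back. A condensed replay using the same paths as rpc/skip_rpc.sh:

    scripts/rpc.py nvmf_create_transport -t tcp          # seed state to persist
    scripts/rpc.py save_config > test/rpc/config.json    # snapshot it
    # restart from the snapshot and confirm the transport was re-created:
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt        # same check as rpc/skip_rpc.sh@51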
00:06:49.945   16:17:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:49.945   16:17:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:49.945   16:17:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:49.945   16:17:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.945  ************************************
00:06:49.945  START TEST skip_rpc_with_delay
00:06:49.945  ************************************
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:49.945   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:50.205    16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:50.205    16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:50.205  [2024-12-09 16:17:19.231469] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:50.205  
00:06:50.205  real	0m0.175s
00:06:50.205  user	0m0.084s
00:06:50.205  sys	0m0.089s
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.205  ************************************
00:06:50.205   16:17:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:50.205  END TEST skip_rpc_with_delay
00:06:50.205  ************************************
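[editor's note] skip_rpc_with_delay verifies that the flag combination itself is rejected: --wait-for-rpc is meaningless when --no-rpc-server disables the RPC server. A minimal sketch of the invalid invocation:

    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started with nothing to wait on" >&2
    fi
    # expected on stderr, as logged above:
    #   Cannot use '--wait-for-rpc' if no RPC server is going to be started.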
00:06:50.205    16:17:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:50.205   16:17:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:50.205   16:17:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:50.205   16:17:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:50.205   16:17:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.205   16:17:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.205  ************************************
00:06:50.205  START TEST exit_on_failed_rpc_init
00:06:50.205  ************************************
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59468
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59468
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59468 ']'
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.205  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.205   16:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:50.465  [2024-12-09 16:17:19.478919] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:06:50.465  [2024-12-09 16:17:19.479063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ]
00:06:50.724  [2024-12-09 16:17:19.660965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.724  [2024-12-09 16:17:19.771851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:51.663    16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:51.663    16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:51.663   16:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:51.663  [2024-12-09 16:17:20.740728] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:06:51.663  [2024-12-09 16:17:20.740841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ]
00:06:51.923  [2024-12-09 16:17:20.921015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.923  [2024-12-09 16:17:21.035699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:51.923  [2024-12-09 16:17:21.035787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:51.923  [2024-12-09 16:17:21.035803] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:51.923  [2024-12-09 16:17:21.035827] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59468
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59468 ']'
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59468
00:06:52.182    16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:52.182    16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59468
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:52.182  killing process with pid 59468
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59468'
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59468
00:06:52.182   16:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59468
00:06:54.721  
00:06:54.721  real	0m4.329s
00:06:54.721  user	0m4.608s
00:06:54.721  sys	0m0.610s
00:06:54.721   16:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.721   16:17:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:54.721  ************************************
00:06:54.721  END TEST exit_on_failed_rpc_init
00:06:54.721  ************************************
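The failure traced above is the point of the test: the second spdk_tgt cannot bind /var/tmp/spdk.sock while the first instance holds it, exits non-zero, and the NOT wrapper counts that failure as a pass. A minimal standalone sketch of the same conflict, assuming the binary path and core masks from this run, with a crude sleep in place of the suite's waitforlisten helper:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &          # first instance owns /var/tmp/spdk.sock
first=$!
sleep 2                       # crude readiness wait (the suite polls the socket)

if "$SPDK_BIN" -m 0x2; then   # second instance must fail RPC init
    echo 'unexpected: second instance started' >&2
    kill "$first"
    exit 1
fi
echo 'second instance failed as expected'
kill -SIGINT "$first" && wait "$first"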
00:06:54.721   16:17:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:54.721  
00:06:54.721  real	0m23.792s
00:06:54.721  user	0m22.606s
00:06:54.721  sys	0m2.318s
00:06:54.721   16:17:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.721   16:17:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.721  ************************************
00:06:54.721  END TEST skip_rpc
00:06:54.721  ************************************
00:06:54.721   16:17:23  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:54.721   16:17:23  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:54.721   16:17:23  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.721   16:17:23  -- common/autotest_common.sh@10 -- # set +x
00:06:54.721  ************************************
00:06:54.721  START TEST rpc_client
00:06:54.721  ************************************
00:06:54.721   16:17:23 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:54.979  * Looking for test storage...
00:06:54.979  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:06:54.979    16:17:23 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:54.979     16:17:23 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:06:54.979     16:17:23 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:54.979    16:17:24 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:54.979    16:17:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:54.979    16:17:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:54.979    16:17:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:54.979    16:17:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:54.979    16:17:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:54.980     16:17:24 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:54.980    16:17:24 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:54.980    16:17:24 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:54.980    16:17:24 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:54.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.980  		--rc genhtml_branch_coverage=1
00:06:54.980  		--rc genhtml_function_coverage=1
00:06:54.980  		--rc genhtml_legend=1
00:06:54.980  		--rc geninfo_all_blocks=1
00:06:54.980  		--rc geninfo_unexecuted_blocks=1
00:06:54.980  		
00:06:54.980  		'
00:06:54.980    16:17:24 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:54.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.980  		--rc genhtml_branch_coverage=1
00:06:54.980  		--rc genhtml_function_coverage=1
00:06:54.980  		--rc genhtml_legend=1
00:06:54.980  		--rc geninfo_all_blocks=1
00:06:54.980  		--rc geninfo_unexecuted_blocks=1
00:06:54.980  		
00:06:54.980  		'
00:06:54.980    16:17:24 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:54.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.980  		--rc genhtml_branch_coverage=1
00:06:54.980  		--rc genhtml_function_coverage=1
00:06:54.980  		--rc genhtml_legend=1
00:06:54.980  		--rc geninfo_all_blocks=1
00:06:54.980  		--rc geninfo_unexecuted_blocks=1
00:06:54.980  		
00:06:54.980  		'
00:06:54.980    16:17:24 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:54.980  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.980  		--rc genhtml_branch_coverage=1
00:06:54.980  		--rc genhtml_function_coverage=1
00:06:54.980  		--rc genhtml_legend=1
00:06:54.980  		--rc geninfo_all_blocks=1
00:06:54.980  		--rc geninfo_unexecuted_blocks=1
00:06:54.980  		
00:06:54.980  		'
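The block above is the coverage probe that precedes every test: cmp_versions splits the two version strings on '.', '-' and ':' and compares them field by field, so an lcov older than 2 gets the extra --rc flags. The same element-wise comparison in isolation (helper name illustrative, purely numeric fields assumed):

# Sketch of the field-wise numeric version comparison traced above.
version_lt() {                       # returns 0 if $1 < $2
    local IFS=.-:                    # split on the same separators the suite uses
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                         # equal is not less-than
}

version_lt 1.15 2 && echo 'old lcov: enable extra --rc flags'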
00:06:54.980   16:17:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:06:54.980  OK
00:06:54.980   16:17:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:54.980  
00:06:54.980  real	0m0.303s
00:06:54.980  user	0m0.162s
00:06:54.980  sys	0m0.160s
00:06:54.980   16:17:24 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.980   16:17:24 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:54.980  ************************************
00:06:54.980  END TEST rpc_client
00:06:54.980  ************************************
00:06:55.240   16:17:24  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:55.240   16:17:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.240   16:17:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.240   16:17:24  -- common/autotest_common.sh@10 -- # set +x
00:06:55.240  ************************************
00:06:55.240  START TEST json_config
00:06:55.240  ************************************
00:06:55.240   16:17:24 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:55.240     16:17:24 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:06:55.240     16:17:24 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:55.240    16:17:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.240    16:17:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.240    16:17:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.240    16:17:24 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.240    16:17:24 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.240    16:17:24 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.240    16:17:24 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.240    16:17:24 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.240    16:17:24 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.240    16:17:24 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.240    16:17:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.240    16:17:24 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:55.240    16:17:24 json_config -- scripts/common.sh@345 -- # : 1
00:06:55.240    16:17:24 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.240    16:17:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.240     16:17:24 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:55.240     16:17:24 json_config -- scripts/common.sh@353 -- # local d=1
00:06:55.240     16:17:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.240     16:17:24 json_config -- scripts/common.sh@355 -- # echo 1
00:06:55.240    16:17:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.240     16:17:24 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:55.240     16:17:24 json_config -- scripts/common.sh@353 -- # local d=2
00:06:55.240     16:17:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.240     16:17:24 json_config -- scripts/common.sh@355 -- # echo 2
00:06:55.240    16:17:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.240    16:17:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.240    16:17:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.240    16:17:24 json_config -- scripts/common.sh@368 -- # return 0
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:55.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.240  		--rc genhtml_branch_coverage=1
00:06:55.240  		--rc genhtml_function_coverage=1
00:06:55.240  		--rc genhtml_legend=1
00:06:55.240  		--rc geninfo_all_blocks=1
00:06:55.240  		--rc geninfo_unexecuted_blocks=1
00:06:55.240  		
00:06:55.240  		'
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:55.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.240  		--rc genhtml_branch_coverage=1
00:06:55.240  		--rc genhtml_function_coverage=1
00:06:55.240  		--rc genhtml_legend=1
00:06:55.240  		--rc geninfo_all_blocks=1
00:06:55.240  		--rc geninfo_unexecuted_blocks=1
00:06:55.240  		
00:06:55.240  		'
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:55.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.240  		--rc genhtml_branch_coverage=1
00:06:55.240  		--rc genhtml_function_coverage=1
00:06:55.240  		--rc genhtml_legend=1
00:06:55.240  		--rc geninfo_all_blocks=1
00:06:55.240  		--rc geninfo_unexecuted_blocks=1
00:06:55.240  		
00:06:55.240  		'
00:06:55.240    16:17:24 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:55.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.240  		--rc genhtml_branch_coverage=1
00:06:55.240  		--rc genhtml_function_coverage=1
00:06:55.240  		--rc genhtml_legend=1
00:06:55.240  		--rc geninfo_all_blocks=1
00:06:55.240  		--rc geninfo_unexecuted_blocks=1
00:06:55.240  		
00:06:55.240  		'
00:06:55.240   16:17:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:55.240     16:17:24 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:55.240    16:17:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:55.240     16:17:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:762461a4-9ecc-4976-86b8-dcd6ce49c43f
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=762461a4-9ecc-4976-86b8-dcd6ce49c43f
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:55.500     16:17:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:55.500     16:17:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:55.500     16:17:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:55.500     16:17:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:55.500      16:17:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.500      16:17:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.500      16:17:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.500      16:17:24 json_config -- paths/export.sh@5 -- # export PATH
00:06:55.500      16:17:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@51 -- # : 0
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:55.500    16:17:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:55.501  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:55.501    16:17:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
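The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 evaluating '[ "$flag" -eq 1 ]' with the flag unset; test(1) cannot compare an empty string numerically, and the trace shows the script falling through to the next branch anyway. Defaulting the expansion avoids the noise (the variable name here is a placeholder, not the script's):

# '[' '' -eq 1 ']' -> "integer expression expected".
# Giving the expansion a default keeps the comparison numeric:
if [[ "${SOME_TEST_FLAG:-0}" -eq 1 ]]; then
    echo 'flag enabled'
fi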
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:06:55.501  WARNING: No tests are enabled so not running JSON configuration tests
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:06:55.501   16:17:24 json_config -- json_config/json_config.sh@28 -- # exit 0
00:06:55.501  
00:06:55.501  real	0m0.236s
00:06:55.501  user	0m0.148s
00:06:55.501  sys	0m0.089s
00:06:55.501   16:17:24 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.501   16:17:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:55.501  ************************************
00:06:55.501  END TEST json_config
00:06:55.501  ************************************
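json_config exits 0 almost immediately above because every flag in its gate is unset: json_config.sh sums the SPDK_TEST_* values and skips the suite when the total is zero. The gating idiom on its own, with unset flags defaulted to 0 as in this run:

# Sum-of-flags gate, as in json_config.sh@26: run only if at least
# one relevant feature flag is enabled.
: "${SPDK_TEST_BLOCKDEV:=0}" "${SPDK_TEST_ISCSI:=0}" "${SPDK_TEST_NVMF:=0}"
: "${SPDK_TEST_VHOST:=0}" "${SPDK_TEST_VHOST_INIT:=0}" "${SPDK_TEST_RBD:=0}"
if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF \
      + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
    echo 'WARNING: No tests are enabled so not running JSON configuration tests'
    exit 0
fi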
00:06:55.501   16:17:24  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:06:55.501   16:17:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.501   16:17:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.501   16:17:24  -- common/autotest_common.sh@10 -- # set +x
00:06:55.501  ************************************
00:06:55.501  START TEST json_config_extra_key
00:06:55.501  ************************************
00:06:55.501   16:17:24 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:06:55.501    16:17:24 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:55.501     16:17:24 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:06:55.501     16:17:24 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:55.762    16:17:24 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.762    16:17:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:06:55.762    16:17:24 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.762    16:17:24 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:55.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.762  		--rc genhtml_branch_coverage=1
00:06:55.762  		--rc genhtml_function_coverage=1
00:06:55.762  		--rc genhtml_legend=1
00:06:55.762  		--rc geninfo_all_blocks=1
00:06:55.762  		--rc geninfo_unexecuted_blocks=1
00:06:55.762  		
00:06:55.762  		'
00:06:55.762    16:17:24 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:55.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.762  		--rc genhtml_branch_coverage=1
00:06:55.762  		--rc genhtml_function_coverage=1
00:06:55.762  		--rc genhtml_legend=1
00:06:55.762  		--rc geninfo_all_blocks=1
00:06:55.762  		--rc geninfo_unexecuted_blocks=1
00:06:55.762  		
00:06:55.762  		'
00:06:55.762    16:17:24 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:55.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.762  		--rc genhtml_branch_coverage=1
00:06:55.762  		--rc genhtml_function_coverage=1
00:06:55.762  		--rc genhtml_legend=1
00:06:55.762  		--rc geninfo_all_blocks=1
00:06:55.762  		--rc geninfo_unexecuted_blocks=1
00:06:55.762  		
00:06:55.762  		'
00:06:55.762    16:17:24 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:55.762  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.762  		--rc genhtml_branch_coverage=1
00:06:55.762  		--rc genhtml_function_coverage=1
00:06:55.762  		--rc genhtml_legend=1
00:06:55.762  		--rc geninfo_all_blocks=1
00:06:55.762  		--rc geninfo_unexecuted_blocks=1
00:06:55.762  		
00:06:55.762  		'
00:06:55.762   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:55.762     16:17:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:55.762     16:17:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:762461a4-9ecc-4976-86b8-dcd6ce49c43f
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=762461a4-9ecc-4976-86b8-dcd6ce49c43f
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:55.762     16:17:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:55.762      16:17:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.762      16:17:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.762      16:17:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.762      16:17:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:55.762      16:17:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:55.762    16:17:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:55.763  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:55.763    16:17:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:55.763    16:17:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:55.763    16:17:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:55.763  INFO: launching applications...
00:06:55.763   16:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59696
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:55.763  Waiting for target to run...
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59696 /var/tmp/spdk_tgt.sock
00:06:55.763   16:17:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:06:55.763   16:17:24 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59696 ']'
00:06:55.763   16:17:24 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:55.763   16:17:24 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.763   16:17:24 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:55.763  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:55.763   16:17:24 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.763   16:17:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:55.763  [2024-12-09 16:17:24.850048] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:06:55.763  [2024-12-09 16:17:24.850859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ]
00:06:56.332  [2024-12-09 16:17:25.259096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.332  [2024-12-09 16:17:25.363018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.901  
00:06:56.901  INFO: shutting down applications...
00:06:56.901   16:17:26 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.901   16:17:26 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:56.901   16:17:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:56.901   16:17:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59696 ]]
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59696
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:06:56.901   16:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:57.471   16:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:57.471   16:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:57.471   16:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:06:57.471   16:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:58.040   16:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:58.040   16:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:58.040   16:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:06:58.040   16:17:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:58.608   16:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:58.608   16:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:58.608   16:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:06:58.608   16:17:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:59.175   16:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:59.175   16:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:59.175   16:17:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:06:59.175   16:17:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:59.434   16:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:59.434   16:17:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:59.434   16:17:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:06:59.434   16:17:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59696
00:07:00.002  SPDK target shutdown done
00:07:00.002  Success
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:00.002   16:17:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:00.002   16:17:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:00.002  
00:07:00.002  real	0m4.598s
00:07:00.002  user	0m3.989s
00:07:00.002  sys	0m0.618s
00:07:00.002  ************************************
00:07:00.002  END TEST json_config_extra_key
00:07:00.002  ************************************
00:07:00.002   16:17:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.002   16:17:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
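The half-second kill -0 probes traced above are json_config/common.sh waiting for the target to die after SIGINT: at most 30 polls, breaking as soon as the PID is gone. The same wait-for-exit pattern as a self-contained helper (name illustrative; the 30 x 0.5s budget is taken from the trace):

# Poll until a PID is gone, as traced in common.sh@40-45.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "pid $pid did not exit in time" >&2
    return 1
}
# usage: shutdown_app "$target_pid"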
00:07:00.002   16:17:29  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:00.002   16:17:29  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.002   16:17:29  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.002   16:17:29  -- common/autotest_common.sh@10 -- # set +x
00:07:00.261  ************************************
00:07:00.261  START TEST alias_rpc
00:07:00.261  ************************************
00:07:00.261   16:17:29 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:00.261  * Looking for test storage...
00:07:00.261  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:00.261     16:17:29 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:07:00.261     16:17:29 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:00.261     16:17:29 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:00.261    16:17:29 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:00.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.261  		--rc genhtml_branch_coverage=1
00:07:00.261  		--rc genhtml_function_coverage=1
00:07:00.261  		--rc genhtml_legend=1
00:07:00.261  		--rc geninfo_all_blocks=1
00:07:00.261  		--rc geninfo_unexecuted_blocks=1
00:07:00.261  		
00:07:00.261  		'
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:00.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.261  		--rc genhtml_branch_coverage=1
00:07:00.261  		--rc genhtml_function_coverage=1
00:07:00.261  		--rc genhtml_legend=1
00:07:00.261  		--rc geninfo_all_blocks=1
00:07:00.261  		--rc geninfo_unexecuted_blocks=1
00:07:00.261  		
00:07:00.261  		'
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:00.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.261  		--rc genhtml_branch_coverage=1
00:07:00.261  		--rc genhtml_function_coverage=1
00:07:00.261  		--rc genhtml_legend=1
00:07:00.261  		--rc geninfo_all_blocks=1
00:07:00.261  		--rc geninfo_unexecuted_blocks=1
00:07:00.261  		
00:07:00.261  		'
00:07:00.261    16:17:29 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:00.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:00.262  		--rc genhtml_branch_coverage=1
00:07:00.262  		--rc genhtml_function_coverage=1
00:07:00.262  		--rc genhtml_legend=1
00:07:00.262  		--rc geninfo_all_blocks=1
00:07:00.262  		--rc geninfo_unexecuted_blocks=1
00:07:00.262  		
00:07:00.262  		'
00:07:00.262   16:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:00.262   16:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59802
00:07:00.262   16:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:00.262   16:17:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59802
00:07:00.262   16:17:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59802 ']'
00:07:00.262   16:17:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:00.262   16:17:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:00.262   16:17:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:00.262  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:00.262   16:17:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:00.262   16:17:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:00.520  [2024-12-09 16:17:29.528765] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:00.520  [2024-12-09 16:17:29.529060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ]
00:07:00.778  [2024-12-09 16:17:29.709268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.778  [2024-12-09 16:17:29.822440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.716   16:17:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:01.716   16:17:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:01.716   16:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:07:01.976   16:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59802
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59802 ']'
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59802
00:07:01.976    16:17:30 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:01.976    16:17:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59802
00:07:01.976  killing process with pid 59802
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59802'
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 59802
00:07:01.976   16:17:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 59802
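killprocess above is deliberately defensive: it requires a PID argument, confirms the process still exists with kill -0, and on Linux checks via ps that it is not about to signal sudo itself before killing and reaping. A condensed sketch of that flow, simplified from the trace (the suite's version signals sudo's child rather than bailing, and wait only reaps PIDs owned by this shell):

# Condensed killprocess, following the autotest_common.sh trace above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1             # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2> /dev/null || return 1   # process must still exist
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1   # real helper targets sudo's child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}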
00:07:04.513  ************************************
00:07:04.513  END TEST alias_rpc
00:07:04.513  ************************************
00:07:04.513  
00:07:04.514  real	0m4.139s
00:07:04.514  user	0m4.075s
00:07:04.514  sys	0m0.611s
00:07:04.514   16:17:33 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.514   16:17:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:04.514   16:17:33  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:07:04.514   16:17:33  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:07:04.514   16:17:33  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.514   16:17:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.514   16:17:33  -- common/autotest_common.sh@10 -- # set +x
00:07:04.514  ************************************
00:07:04.514  START TEST spdkcli_tcp
00:07:04.514  ************************************
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:07:04.514  * Looking for test storage...
00:07:04.514  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:04.514     16:17:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:07:04.514     16:17:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.514     16:17:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:04.514    16:17:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:04.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.514  		--rc genhtml_branch_coverage=1
00:07:04.514  		--rc genhtml_function_coverage=1
00:07:04.514  		--rc genhtml_legend=1
00:07:04.514  		--rc geninfo_all_blocks=1
00:07:04.514  		--rc geninfo_unexecuted_blocks=1
00:07:04.514  		
00:07:04.514  		'
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:04.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.514  		--rc genhtml_branch_coverage=1
00:07:04.514  		--rc genhtml_function_coverage=1
00:07:04.514  		--rc genhtml_legend=1
00:07:04.514  		--rc geninfo_all_blocks=1
00:07:04.514  		--rc geninfo_unexecuted_blocks=1
00:07:04.514  		
00:07:04.514  		'
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:04.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.514  		--rc genhtml_branch_coverage=1
00:07:04.514  		--rc genhtml_function_coverage=1
00:07:04.514  		--rc genhtml_legend=1
00:07:04.514  		--rc geninfo_all_blocks=1
00:07:04.514  		--rc geninfo_unexecuted_blocks=1
00:07:04.514  		
00:07:04.514  		'
00:07:04.514    16:17:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:04.514  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.514  		--rc genhtml_branch_coverage=1
00:07:04.514  		--rc genhtml_function_coverage=1
00:07:04.514  		--rc genhtml_legend=1
00:07:04.514  		--rc geninfo_all_blocks=1
00:07:04.514  		--rc geninfo_unexecuted_blocks=1
00:07:04.514  		
00:07:04.514  		'
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:07:04.514    16:17:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:07:04.514    16:17:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59915
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:07:04.514   16:17:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59915
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59915 ']'
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:04.514  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.514   16:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:04.774  [2024-12-09 16:17:33.756742] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:04.774  [2024-12-09 16:17:33.756896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ]
00:07:04.774  [2024-12-09 16:17:33.923760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:05.034  [2024-12-09 16:17:34.043066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.034  [2024-12-09 16:17:34.043098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
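With the target up on cores 0 and 1, the trace below bridges TCP to the RPC Unix socket: socat listens on 127.0.0.1:9998 and forwards the connection to /var/tmp/spdk.sock, which is what lets rpc.py talk TCP. The bridge in isolation, with the port and socket path from this run (socat without the fork option serves a single connection):

# Forward a TCP port to the SPDK RPC Unix socket, as the test does below.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Any JSON-RPC client that only speaks TCP can now reach the target:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2> /dev/null   # cleanup; the single-shot listener may already be gone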
00:07:05.972   16:17:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.972   16:17:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
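The (( i == 0 )) / return 0 pair above is waitforlisten's retry loop exiting on its first successful probe of the target's RPC socket (max_retries=100 per @840). A simplified sketch of the pattern, not the exact common.sh implementation, using spdk_get_version as the liveness probe:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                     # never came up
    }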
00:07:05.972   16:17:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59932
00:07:05.972   16:17:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:07:05.972   16:17:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:07:05.972  [
00:07:05.972    "bdev_malloc_delete",
00:07:05.972    "bdev_malloc_create",
00:07:05.972    "bdev_null_resize",
00:07:05.972    "bdev_null_delete",
00:07:05.972    "bdev_null_create",
00:07:05.972    "bdev_nvme_cuse_unregister",
00:07:05.972    "bdev_nvme_cuse_register",
00:07:05.972    "bdev_opal_new_user",
00:07:05.972    "bdev_opal_set_lock_state",
00:07:05.972    "bdev_opal_delete",
00:07:05.972    "bdev_opal_get_info",
00:07:05.972    "bdev_opal_create",
00:07:05.972    "bdev_nvme_opal_revert",
00:07:05.972    "bdev_nvme_opal_init",
00:07:05.972    "bdev_nvme_send_cmd",
00:07:05.972    "bdev_nvme_set_keys",
00:07:05.972    "bdev_nvme_get_path_iostat",
00:07:05.972    "bdev_nvme_get_mdns_discovery_info",
00:07:05.972    "bdev_nvme_stop_mdns_discovery",
00:07:05.972    "bdev_nvme_start_mdns_discovery",
00:07:05.972    "bdev_nvme_set_multipath_policy",
00:07:05.972    "bdev_nvme_set_preferred_path",
00:07:05.972    "bdev_nvme_get_io_paths",
00:07:05.972    "bdev_nvme_remove_error_injection",
00:07:05.972    "bdev_nvme_add_error_injection",
00:07:05.972    "bdev_nvme_get_discovery_info",
00:07:05.972    "bdev_nvme_stop_discovery",
00:07:05.972    "bdev_nvme_start_discovery",
00:07:05.972    "bdev_nvme_get_controller_health_info",
00:07:05.972    "bdev_nvme_disable_controller",
00:07:05.972    "bdev_nvme_enable_controller",
00:07:05.972    "bdev_nvme_reset_controller",
00:07:05.972    "bdev_nvme_get_transport_statistics",
00:07:05.972    "bdev_nvme_apply_firmware",
00:07:05.972    "bdev_nvme_detach_controller",
00:07:05.972    "bdev_nvme_get_controllers",
00:07:05.972    "bdev_nvme_attach_controller",
00:07:05.972    "bdev_nvme_set_hotplug",
00:07:05.972    "bdev_nvme_set_options",
00:07:05.972    "bdev_passthru_delete",
00:07:05.972    "bdev_passthru_create",
00:07:05.972    "bdev_lvol_set_parent_bdev",
00:07:05.972    "bdev_lvol_set_parent",
00:07:05.972    "bdev_lvol_check_shallow_copy",
00:07:05.972    "bdev_lvol_start_shallow_copy",
00:07:05.972    "bdev_lvol_grow_lvstore",
00:07:05.972    "bdev_lvol_get_lvols",
00:07:05.972    "bdev_lvol_get_lvstores",
00:07:05.972    "bdev_lvol_delete",
00:07:05.972    "bdev_lvol_set_read_only",
00:07:05.972    "bdev_lvol_resize",
00:07:05.972    "bdev_lvol_decouple_parent",
00:07:05.972    "bdev_lvol_inflate",
00:07:05.972    "bdev_lvol_rename",
00:07:05.973    "bdev_lvol_clone_bdev",
00:07:05.973    "bdev_lvol_clone",
00:07:05.973    "bdev_lvol_snapshot",
00:07:05.973    "bdev_lvol_create",
00:07:05.973    "bdev_lvol_delete_lvstore",
00:07:05.973    "bdev_lvol_rename_lvstore",
00:07:05.973    "bdev_lvol_create_lvstore",
00:07:05.973    "bdev_raid_set_options",
00:07:05.973    "bdev_raid_remove_base_bdev",
00:07:05.973    "bdev_raid_add_base_bdev",
00:07:05.973    "bdev_raid_delete",
00:07:05.973    "bdev_raid_create",
00:07:05.973    "bdev_raid_get_bdevs",
00:07:05.973    "bdev_error_inject_error",
00:07:05.973    "bdev_error_delete",
00:07:05.973    "bdev_error_create",
00:07:05.973    "bdev_split_delete",
00:07:05.973    "bdev_split_create",
00:07:05.973    "bdev_delay_delete",
00:07:05.973    "bdev_delay_create",
00:07:05.973    "bdev_delay_update_latency",
00:07:05.973    "bdev_zone_block_delete",
00:07:05.973    "bdev_zone_block_create",
00:07:05.973    "blobfs_create",
00:07:05.973    "blobfs_detect",
00:07:05.973    "blobfs_set_cache_size",
00:07:05.973    "bdev_xnvme_delete",
00:07:05.973    "bdev_xnvme_create",
00:07:05.973    "bdev_aio_delete",
00:07:05.973    "bdev_aio_rescan",
00:07:05.973    "bdev_aio_create",
00:07:05.973    "bdev_ftl_set_property",
00:07:05.973    "bdev_ftl_get_properties",
00:07:05.973    "bdev_ftl_get_stats",
00:07:05.973    "bdev_ftl_unmap",
00:07:05.973    "bdev_ftl_unload",
00:07:05.973    "bdev_ftl_delete",
00:07:05.973    "bdev_ftl_load",
00:07:05.973    "bdev_ftl_create",
00:07:05.973    "bdev_virtio_attach_controller",
00:07:05.973    "bdev_virtio_scsi_get_devices",
00:07:05.973    "bdev_virtio_detach_controller",
00:07:05.973    "bdev_virtio_blk_set_hotplug",
00:07:05.973    "bdev_iscsi_delete",
00:07:05.973    "bdev_iscsi_create",
00:07:05.973    "bdev_iscsi_set_options",
00:07:05.973    "accel_error_inject_error",
00:07:05.973    "ioat_scan_accel_module",
00:07:05.973    "dsa_scan_accel_module",
00:07:05.973    "iaa_scan_accel_module",
00:07:05.973    "keyring_file_remove_key",
00:07:05.973    "keyring_file_add_key",
00:07:05.973    "keyring_linux_set_options",
00:07:05.973    "fsdev_aio_delete",
00:07:05.973    "fsdev_aio_create",
00:07:05.973    "iscsi_get_histogram",
00:07:05.973    "iscsi_enable_histogram",
00:07:05.973    "iscsi_set_options",
00:07:05.973    "iscsi_get_auth_groups",
00:07:05.973    "iscsi_auth_group_remove_secret",
00:07:05.973    "iscsi_auth_group_add_secret",
00:07:05.973    "iscsi_delete_auth_group",
00:07:05.973    "iscsi_create_auth_group",
00:07:05.973    "iscsi_set_discovery_auth",
00:07:05.973    "iscsi_get_options",
00:07:05.973    "iscsi_target_node_request_logout",
00:07:05.973    "iscsi_target_node_set_redirect",
00:07:05.973    "iscsi_target_node_set_auth",
00:07:05.973    "iscsi_target_node_add_lun",
00:07:05.973    "iscsi_get_stats",
00:07:05.973    "iscsi_get_connections",
00:07:05.973    "iscsi_portal_group_set_auth",
00:07:05.973    "iscsi_start_portal_group",
00:07:05.973    "iscsi_delete_portal_group",
00:07:05.973    "iscsi_create_portal_group",
00:07:05.973    "iscsi_get_portal_groups",
00:07:05.973    "iscsi_delete_target_node",
00:07:05.973    "iscsi_target_node_remove_pg_ig_maps",
00:07:05.973    "iscsi_target_node_add_pg_ig_maps",
00:07:05.973    "iscsi_create_target_node",
00:07:05.973    "iscsi_get_target_nodes",
00:07:05.973    "iscsi_delete_initiator_group",
00:07:05.973    "iscsi_initiator_group_remove_initiators",
00:07:05.973    "iscsi_initiator_group_add_initiators",
00:07:05.973    "iscsi_create_initiator_group",
00:07:05.973    "iscsi_get_initiator_groups",
00:07:05.973    "nvmf_set_crdt",
00:07:05.973    "nvmf_set_config",
00:07:05.973    "nvmf_set_max_subsystems",
00:07:05.973    "nvmf_stop_mdns_prr",
00:07:05.973    "nvmf_publish_mdns_prr",
00:07:05.973    "nvmf_subsystem_get_listeners",
00:07:05.973    "nvmf_subsystem_get_qpairs",
00:07:05.973    "nvmf_subsystem_get_controllers",
00:07:05.973    "nvmf_get_stats",
00:07:05.973    "nvmf_get_transports",
00:07:05.973    "nvmf_create_transport",
00:07:05.973    "nvmf_get_targets",
00:07:05.973    "nvmf_delete_target",
00:07:05.973    "nvmf_create_target",
00:07:05.973    "nvmf_subsystem_allow_any_host",
00:07:05.973    "nvmf_subsystem_set_keys",
00:07:05.973    "nvmf_subsystem_remove_host",
00:07:05.973    "nvmf_subsystem_add_host",
00:07:05.973    "nvmf_ns_remove_host",
00:07:05.973    "nvmf_ns_add_host",
00:07:05.973    "nvmf_subsystem_remove_ns",
00:07:05.973    "nvmf_subsystem_set_ns_ana_group",
00:07:05.973    "nvmf_subsystem_add_ns",
00:07:05.973    "nvmf_subsystem_listener_set_ana_state",
00:07:05.973    "nvmf_discovery_get_referrals",
00:07:05.973    "nvmf_discovery_remove_referral",
00:07:05.973    "nvmf_discovery_add_referral",
00:07:05.973    "nvmf_subsystem_remove_listener",
00:07:05.973    "nvmf_subsystem_add_listener",
00:07:05.973    "nvmf_delete_subsystem",
00:07:05.973    "nvmf_create_subsystem",
00:07:05.973    "nvmf_get_subsystems",
00:07:05.973    "env_dpdk_get_mem_stats",
00:07:05.973    "nbd_get_disks",
00:07:05.973    "nbd_stop_disk",
00:07:05.973    "nbd_start_disk",
00:07:05.973    "ublk_recover_disk",
00:07:05.973    "ublk_get_disks",
00:07:05.973    "ublk_stop_disk",
00:07:05.973    "ublk_start_disk",
00:07:05.973    "ublk_destroy_target",
00:07:05.973    "ublk_create_target",
00:07:05.973    "virtio_blk_create_transport",
00:07:05.973    "virtio_blk_get_transports",
00:07:05.973    "vhost_controller_set_coalescing",
00:07:05.973    "vhost_get_controllers",
00:07:05.973    "vhost_delete_controller",
00:07:05.973    "vhost_create_blk_controller",
00:07:05.973    "vhost_scsi_controller_remove_target",
00:07:05.973    "vhost_scsi_controller_add_target",
00:07:05.973    "vhost_start_scsi_controller",
00:07:05.973    "vhost_create_scsi_controller",
00:07:05.973    "thread_set_cpumask",
00:07:05.973    "scheduler_set_options",
00:07:05.973    "framework_get_governor",
00:07:05.973    "framework_get_scheduler",
00:07:05.973    "framework_set_scheduler",
00:07:05.973    "framework_get_reactors",
00:07:05.973    "thread_get_io_channels",
00:07:05.973    "thread_get_pollers",
00:07:05.973    "thread_get_stats",
00:07:05.973    "framework_monitor_context_switch",
00:07:05.973    "spdk_kill_instance",
00:07:05.973    "log_enable_timestamps",
00:07:05.973    "log_get_flags",
00:07:05.973    "log_clear_flag",
00:07:05.973    "log_set_flag",
00:07:05.973    "log_get_level",
00:07:05.973    "log_set_level",
00:07:05.973    "log_get_print_level",
00:07:05.973    "log_set_print_level",
00:07:05.973    "framework_enable_cpumask_locks",
00:07:05.973    "framework_disable_cpumask_locks",
00:07:05.973    "framework_wait_init",
00:07:05.973    "framework_start_init",
00:07:05.973    "scsi_get_devices",
00:07:05.973    "bdev_get_histogram",
00:07:05.973    "bdev_enable_histogram",
00:07:05.973    "bdev_set_qos_limit",
00:07:05.973    "bdev_set_qd_sampling_period",
00:07:05.973    "bdev_get_bdevs",
00:07:05.973    "bdev_reset_iostat",
00:07:05.973    "bdev_get_iostat",
00:07:05.973    "bdev_examine",
00:07:05.973    "bdev_wait_for_examine",
00:07:05.973    "bdev_set_options",
00:07:05.973    "accel_get_stats",
00:07:05.973    "accel_set_options",
00:07:05.973    "accel_set_driver",
00:07:05.973    "accel_crypto_key_destroy",
00:07:05.973    "accel_crypto_keys_get",
00:07:05.973    "accel_crypto_key_create",
00:07:05.973    "accel_assign_opc",
00:07:05.973    "accel_get_module_info",
00:07:05.973    "accel_get_opc_assignments",
00:07:05.973    "vmd_rescan",
00:07:05.973    "vmd_remove_device",
00:07:05.973    "vmd_enable",
00:07:05.973    "sock_get_default_impl",
00:07:05.973    "sock_set_default_impl",
00:07:05.973    "sock_impl_set_options",
00:07:05.973    "sock_impl_get_options",
00:07:05.973    "iobuf_get_stats",
00:07:05.973    "iobuf_set_options",
00:07:05.973    "keyring_get_keys",
00:07:05.973    "framework_get_pci_devices",
00:07:05.973    "framework_get_config",
00:07:05.973    "framework_get_subsystems",
00:07:05.973    "fsdev_set_opts",
00:07:05.973    "fsdev_get_opts",
00:07:05.973    "trace_get_info",
00:07:05.973    "trace_get_tpoint_group_mask",
00:07:05.973    "trace_disable_tpoint_group",
00:07:05.973    "trace_enable_tpoint_group",
00:07:05.973    "trace_clear_tpoint_mask",
00:07:05.973    "trace_set_tpoint_mask",
00:07:05.973    "notify_get_notifications",
00:07:05.973    "notify_get_types",
00:07:05.973    "spdk_get_version",
00:07:05.973    "rpc_get_methods"
00:07:05.973  ]
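spdk_tgt itself only listens on the UNIX socket /var/tmp/spdk.sock, so the test bridges it to TCP with socat (@30) and then drives rpc.py against 127.0.0.1:9998 (@33); rpc_get_methods coming back with the full method list proves the TCP path works end to end. The same bridge can be reproduced by hand (socat's TCP-LISTEN handles a single connection by default, which is all one RPC call needs; -r sets rpc.py's retry count and -t its timeout, as in the log):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods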
00:07:05.973   16:17:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:07:05.973   16:17:35 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:05.973   16:17:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:06.234   16:17:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:07:06.234   16:17:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59915
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59915 ']'
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59915
00:07:06.234    16:17:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:06.234    16:17:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59915
00:07:06.234  killing process with pid 59915
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59915'
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59915
00:07:06.234   16:17:35 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59915
00:07:08.776  ************************************
00:07:08.776  END TEST spdkcli_tcp
00:07:08.776  ************************************
00:07:08.776  
00:07:08.776  real	0m4.212s
00:07:08.776  user	0m7.467s
00:07:08.776  sys	0m0.661s
00:07:08.776   16:17:37 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.776   16:17:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:08.776   16:17:37  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:08.776   16:17:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:08.776   16:17:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.776   16:17:37  -- common/autotest_common.sh@10 -- # set +x
00:07:08.776  ************************************
00:07:08.776  START TEST dpdk_mem_utility
00:07:08.776  ************************************
00:07:08.776   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:08.776  * Looking for test storage...
00:07:08.776  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
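run_test (@167 above and throughout this log) is the harness wrapper that prints the START/END banners and the real/user/sys timing around each sub-test. A condensed sketch of what it does; the real helper in autotest_common.sh also validates arguments and manages xtrace state:

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }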
00:07:08.776    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:08.776     16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:07:08.776     16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:08.776    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:08.776     16:17:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:08.776    16:17:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
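The block above is the lcov version gate: the installed version is pulled out with awk '{print $NF}', then lt 1.15 2 splits both versions on the IFS set ".-:" and compares them field by field. A condensed sketch of that comparison, simplified from scripts/common.sh:

    cmp_lt() {                        # succeeds iff $1 < $2, compared field by field
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                      # equal, so not less-than
    }
    cmp_lt 1.15 2 && echo "1.15 < 2"  # prints: 1.15 < 2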
00:07:08.776    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:08.776    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:08.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.776  		--rc genhtml_branch_coverage=1
00:07:08.776  		--rc genhtml_function_coverage=1
00:07:08.776  		--rc genhtml_legend=1
00:07:08.776  		--rc geninfo_all_blocks=1
00:07:08.776  		--rc geninfo_unexecuted_blocks=1
00:07:08.776  		
00:07:08.776  		'
00:07:08.776    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:08.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.776  		--rc genhtml_branch_coverage=1
00:07:08.776  		--rc genhtml_function_coverage=1
00:07:08.776  		--rc genhtml_legend=1
00:07:08.776  		--rc geninfo_all_blocks=1
00:07:08.776  		--rc geninfo_unexecuted_blocks=1
00:07:08.776  		
00:07:08.776  		'
00:07:08.777    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:08.777  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.777  		--rc genhtml_branch_coverage=1
00:07:08.777  		--rc genhtml_function_coverage=1
00:07:08.777  		--rc genhtml_legend=1
00:07:08.777  		--rc geninfo_all_blocks=1
00:07:08.777  		--rc geninfo_unexecuted_blocks=1
00:07:08.777  		
00:07:08.777  		'
00:07:08.777    16:17:37 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:08.777  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:08.777  		--rc genhtml_branch_coverage=1
00:07:08.777  		--rc genhtml_function_coverage=1
00:07:08.777  		--rc genhtml_legend=1
00:07:08.777  		--rc geninfo_all_blocks=1
00:07:08.777  		--rc geninfo_unexecuted_blocks=1
00:07:08.777  		
00:07:08.777  		'
00:07:08.777   16:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:08.777   16:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60037
00:07:08.777   16:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:08.777   16:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60037
00:07:08.777   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60037 ']'
00:07:08.777   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:08.777   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:08.777   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:08.777  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:08.777   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:08.777   16:17:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:09.037  [2024-12-09 16:17:38.031556] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:09.037  [2024-12-09 16:17:38.031681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ]
00:07:09.037  [2024-12-09 16:17:38.211791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.306  [2024-12-09 16:17:38.324959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.299   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.299   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:07:10.299   16:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:10.299   16:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:10.299   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:10.299   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:10.299  {
00:07:10.299  "filename": "/tmp/spdk_mem_dump.txt"
00:07:10.299  }
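env_dpdk_get_mem_stats asks the running target to dump its DPDK heap state to a file and replies with the path (/tmp/spdk_mem_dump.txt above); scripts/dpdk_mem_info.py then parses that dump into the heap/mempool/memzone views that follow. Against any running target the same sequence is:

    scripts/rpc.py env_dpdk_get_mem_stats   # writes the dump, returns its filename
    scripts/dpdk_mem_info.py                # summary view, as at @21 below
    scripts/dpdk_mem_info.py -m 0           # per-element detail, as at @23 below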
00:07:10.299   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:10.299   16:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:10.299  DPDK memory size 824.000000 MiB in 1 heap(s)
00:07:10.299  1 heaps totaling size 824.000000 MiB
00:07:10.299    size:  824.000000 MiB heap id: 0
00:07:10.299  end heaps----------
00:07:10.299  9 mempools totaling size 603.782043 MiB
00:07:10.299    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:10.299    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:10.299    size:  100.555481 MiB name: bdev_io_60037
00:07:10.299    size:   50.003479 MiB name: msgpool_60037
00:07:10.299    size:   36.509338 MiB name: fsdev_io_60037
00:07:10.299    size:   21.763794 MiB name: PDU_Pool
00:07:10.299    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:10.299    size:    4.133484 MiB name: evtpool_60037
00:07:10.299    size:    0.026123 MiB name: Session_Pool
00:07:10.299  end mempools-------
00:07:10.299  6 memzones totaling size 4.142822 MiB
00:07:10.299    size:    1.000366 MiB name: RG_ring_0_60037
00:07:10.299    size:    1.000366 MiB name: RG_ring_1_60037
00:07:10.299    size:    1.000366 MiB name: RG_ring_4_60037
00:07:10.299    size:    1.000366 MiB name: RG_ring_5_60037
00:07:10.299    size:    0.125366 MiB name: RG_ring_2_60037
00:07:10.299    size:    0.015991 MiB name: RG_ring_3_60037
00:07:10.299  end memzones-------
00:07:10.299   16:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:07:10.299  heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18
00:07:10.299    list of free elements. size: 16.779419 MiB
00:07:10.299      element at address: 0x200006400000 with size:    1.995972 MiB
00:07:10.299      element at address: 0x20000a600000 with size:    1.995972 MiB
00:07:10.299      element at address: 0x200003e00000 with size:    1.991028 MiB
00:07:10.299      element at address: 0x200019500040 with size:    0.999939 MiB
00:07:10.299      element at address: 0x200019900040 with size:    0.999939 MiB
00:07:10.299      element at address: 0x200019a00000 with size:    0.999084 MiB
00:07:10.299      element at address: 0x200032600000 with size:    0.994324 MiB
00:07:10.299      element at address: 0x200000400000 with size:    0.992004 MiB
00:07:10.299      element at address: 0x200019200000 with size:    0.959656 MiB
00:07:10.299      element at address: 0x200019d00040 with size:    0.936401 MiB
00:07:10.299      element at address: 0x200000200000 with size:    0.716980 MiB
00:07:10.299      element at address: 0x20001b400000 with size:    0.560730 MiB
00:07:10.299      element at address: 0x200000c00000 with size:    0.489197 MiB
00:07:10.299      element at address: 0x200019600000 with size:    0.487976 MiB
00:07:10.299      element at address: 0x200019e00000 with size:    0.485413 MiB
00:07:10.299      element at address: 0x200012c00000 with size:    0.433472 MiB
00:07:10.299      element at address: 0x200028800000 with size:    0.390442 MiB
00:07:10.299      element at address: 0x200000800000 with size:    0.350891 MiB
00:07:10.299    list of standard malloc elements. size: 199.289673 MiB
00:07:10.299      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:07:10.299      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:07:10.299      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:07:10.299      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:07:10.299      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:07:10.299      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:07:10.299      element at address: 0x200019deff40 with size:    0.062683 MiB
00:07:10.299      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:07:10.299      element at address: 0x20000a5ff040 with size:    0.000427 MiB
00:07:10.299      element at address: 0x200019defdc0 with size:    0.000366 MiB
00:07:10.299      element at address: 0x200012bff040 with size:    0.000305 MiB
00:07:10.299      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fdf40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe040 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe140 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe240 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe340 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe440 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe540 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe640 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe740 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe840 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fe940 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fea40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004feb40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fec40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fed40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fee40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004fef40 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004ff040 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004ff140 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004ff240 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004ff340 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004ff440 with size:    0.000244 MiB
00:07:10.299      element at address: 0x2000004ff540 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ff640 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ff740 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ffbc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e1c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e2c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e3c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e4c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e5c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e6c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e7c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e8c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087e9c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087eac0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087ebc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087ecc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087edc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087eec0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087efc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087f0c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087f1c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087f2c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d3c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d4c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d5c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d6c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d7c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d8c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7d9c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7dac0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7dbc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7dcc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7ddc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7dec0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7dfc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e0c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e1c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e2c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e3c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e4c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e5c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e6c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e7c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e8c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7e9c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7eac0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000c7ebc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200000cff000 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff200 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff300 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff400 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff500 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff600 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff700 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff800 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ff900 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ffa00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ffb00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff180 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff280 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff380 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff480 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff580 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff680 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff780 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff880 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bff980 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bffa80 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6ef80 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f080 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f180 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f280 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f380 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f480 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f580 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f680 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f780 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012c6f880 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200012cefbc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000192fdd00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967cec0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967cfc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d0c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d1c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d2c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d3c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d4c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d5c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d6c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d7c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d8c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001967d9c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x2000196fdd00 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200019affc40 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200019defbc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200019defcc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x200019ebc680 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48f8c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48f9c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48fac0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48fbc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48fcc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48fdc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48fec0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b48ffc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4900c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4901c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4902c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4903c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4904c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4905c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4906c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4907c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4908c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4909c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b490ac0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b490bc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b490cc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b490dc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b490ec0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b490fc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4910c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4911c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4912c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4913c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4914c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4915c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4916c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4917c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4918c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4919c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b491ac0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b491bc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b491cc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b491dc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b491ec0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b491fc0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4920c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4921c0 with size:    0.000244 MiB
00:07:10.300      element at address: 0x20001b4922c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4923c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4924c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4925c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4926c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4927c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4928c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4929c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b492ac0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b492bc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b492cc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b492dc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b492ec0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b492fc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4930c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4931c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4932c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4933c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4934c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4935c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4936c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4937c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4938c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4939c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b493ac0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b493bc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b493cc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b493dc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b493ec0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b493fc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4940c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4941c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4942c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4943c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4944c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4945c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4946c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4947c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4948c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4949c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b494ac0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b494bc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b494cc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b494dc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b494ec0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b494fc0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4950c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4951c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4952c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20001b4953c0 with size:    0.000244 MiB
00:07:10.301      element at address: 0x200028863f40 with size:    0.000244 MiB
00:07:10.301      element at address: 0x200028864040 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ad00 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886af80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b080 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b180 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b280 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b380 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b480 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b580 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b680 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b780 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b880 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886b980 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ba80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886bb80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886bc80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886bd80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886be80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886bf80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c080 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c180 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c280 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c380 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c480 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c580 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c680 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c780 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c880 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886c980 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ca80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886cb80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886cc80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886cd80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ce80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886cf80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d080 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d180 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d280 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d380 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d480 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d580 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d680 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d780 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d880 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886d980 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886da80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886db80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886dc80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886dd80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886de80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886df80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e080 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e180 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e280 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e380 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e480 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e580 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e680 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e780 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e880 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886e980 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ea80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886eb80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ec80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ed80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ee80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886ef80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f080 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f180 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f280 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f380 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f480 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f580 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f680 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f780 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f880 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886f980 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886fa80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886fb80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886fc80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886fd80 with size:    0.000244 MiB
00:07:10.301      element at address: 0x20002886fe80 with size:    0.000244 MiB
00:07:10.301    list of memzone associated elements. size: 607.930908 MiB
00:07:10.301      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:07:10.301        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:10.301      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:07:10.301        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:10.301      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:07:10.301        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_60037_0
00:07:10.301      element at address: 0x200000dff340 with size:   48.003113 MiB
00:07:10.301        associated memzone info: size:   48.002930 MiB name: MP_msgpool_60037_0
00:07:10.301      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:07:10.301        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_60037_0
00:07:10.301      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:07:10.301        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:10.301      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:07:10.301        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:10.301      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:07:10.301        associated memzone info: size:    3.000122 MiB name: MP_evtpool_60037_0
00:07:10.301      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:07:10.301        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_60037
00:07:10.302      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:07:10.302        associated memzone info: size:    1.007996 MiB name: MP_evtpool_60037
00:07:10.302      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:07:10.302        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:10.302      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:07:10.302        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:10.302      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:07:10.302        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:10.302      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:07:10.302        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:10.302      element at address: 0x200000cff100 with size:    1.000549 MiB
00:07:10.302        associated memzone info: size:    1.000366 MiB name: RG_ring_0_60037
00:07:10.302      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:07:10.302        associated memzone info: size:    1.000366 MiB name: RG_ring_1_60037
00:07:10.302      element at address: 0x200019affd40 with size:    1.000549 MiB
00:07:10.302        associated memzone info: size:    1.000366 MiB name: RG_ring_4_60037
00:07:10.302      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:07:10.302        associated memzone info: size:    1.000366 MiB name: RG_ring_5_60037
00:07:10.302      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:07:10.302        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_60037
00:07:10.302      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:07:10.302        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_60037
00:07:10.302      element at address: 0x20001967dac0 with size:    0.500549 MiB
00:07:10.302        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:10.302      element at address: 0x200012c6f980 with size:    0.500549 MiB
00:07:10.302        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:10.302      element at address: 0x200019e7c440 with size:    0.250549 MiB
00:07:10.302        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:10.302      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:07:10.302        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_60037
00:07:10.302      element at address: 0x20000085df80 with size:    0.125549 MiB
00:07:10.302        associated memzone info: size:    0.125366 MiB name: RG_ring_2_60037
00:07:10.302      element at address: 0x2000192f5ac0 with size:    0.031799 MiB
00:07:10.302        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:10.302      element at address: 0x200028864140 with size:    0.023804 MiB
00:07:10.302        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:10.302      element at address: 0x200000859d40 with size:    0.016174 MiB
00:07:10.302        associated memzone info: size:    0.015991 MiB name: RG_ring_3_60037
00:07:10.302      element at address: 0x20002886a2c0 with size:    0.002502 MiB
00:07:10.302        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:10.302      element at address: 0x2000004ffa40 with size:    0.000366 MiB
00:07:10.302        associated memzone info: size:    0.000183 MiB name: MP_msgpool_60037
00:07:10.302      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:07:10.302        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_60037
00:07:10.302      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:07:10.302        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_60037
00:07:10.302      element at address: 0x20002886ae00 with size:    0.000366 MiB
00:07:10.302        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
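Each memzone above is tied back to its backing heap element, and the per-instance pools carry the target's pid as a suffix (msgpool_60037, bdev_io_60037, ...), which is how the dump attributes them to this run. As a rough cross-check, the element sizes in a saved copy of this output can be totaled with awk (mem_info.txt is a hypothetical filename for the captured dump):

    awk '/element at address/ { for (i = 1; i <= NF; i++) if ($i == "size:") s += $(i+1) }
         END { printf "total element size: %.6f MiB\n", s }' mem_info.txt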
00:07:10.302   16:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:10.302   16:17:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60037
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60037 ']'
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60037
00:07:10.302    16:17:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:10.302    16:17:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60037
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:10.302  killing process with pid 60037
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60037'
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60037
00:07:10.302   16:17:39 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60037
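The kill/wait sequence above is the killprocess helper from autotest_common.sh (@954-@978 in the trace). A minimal sketch of its shape, reconstructed from the traced lines rather than copied verbatim:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # a pid argument is required
        kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to kill
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 above
        fi
        if [ "$process_name" = sudo ]; then
            :   # the real helper signals the sudo child instead; elided here
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                # reap it so the exit status propagates
    }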
00:07:12.840  
00:07:12.840  real	0m4.082s
00:07:12.840  user	0m3.985s
00:07:12.840  sys	0m0.588s
00:07:12.840   16:17:41 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:12.840   16:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:12.840  ************************************
00:07:12.840  END TEST dpdk_mem_utility
00:07:12.840  ************************************
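The START/END banners and the real/user/sys timing blocks in this log come from the run_test wrapper. A sketch of its shape as implied by the trace (the real wrapper also handles xtrace toggling and exit-status bookkeeping, which is elided here):

    run_test() {
        [ "$#" -le 1 ] && return 1   # needs a suite name plus a command to run
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        time "$@"                    # source of the real/user/sys lines above
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
    }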
00:07:12.840   16:17:41  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:12.840   16:17:41  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:12.840   16:17:41  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:12.840   16:17:41  -- common/autotest_common.sh@10 -- # set +x
00:07:12.840  ************************************
00:07:12.840  START TEST event
00:07:12.840  ************************************
00:07:12.840   16:17:41 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:12.840  * Looking for test storage...
00:07:12.840  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:12.840    16:17:41 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:12.840     16:17:41 event -- common/autotest_common.sh@1711 -- # lcov --version
00:07:12.840     16:17:41 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:13.099    16:17:42 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:13.099    16:17:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:13.099    16:17:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:13.099    16:17:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:13.099    16:17:42 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:13.099    16:17:42 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:13.099    16:17:42 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:13.099    16:17:42 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:13.099    16:17:42 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:13.099    16:17:42 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:13.099    16:17:42 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:13.099    16:17:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:13.099    16:17:42 event -- scripts/common.sh@344 -- # case "$op" in
00:07:13.099    16:17:42 event -- scripts/common.sh@345 -- # : 1
00:07:13.099    16:17:42 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:13.100    16:17:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:13.100     16:17:42 event -- scripts/common.sh@365 -- # decimal 1
00:07:13.100     16:17:42 event -- scripts/common.sh@353 -- # local d=1
00:07:13.100     16:17:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:13.100     16:17:42 event -- scripts/common.sh@355 -- # echo 1
00:07:13.100    16:17:42 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:13.100     16:17:42 event -- scripts/common.sh@366 -- # decimal 2
00:07:13.100     16:17:42 event -- scripts/common.sh@353 -- # local d=2
00:07:13.100     16:17:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:13.100     16:17:42 event -- scripts/common.sh@355 -- # echo 2
00:07:13.100    16:17:42 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:13.100    16:17:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:13.100    16:17:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:13.100    16:17:42 event -- scripts/common.sh@368 -- # return 0
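The lt/cmp_versions trace above (used here to decide whether lcov is older than 2.x) reduces to a component-wise numeric compare. A sketch reconstructed from the traced scripts/common.sh lines:

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"            # split on dots, dashes and colons
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}     # missing components compare as 0
            (( d1 > d2 )) && { [ "$op" = '>' ]; return; }
            (( d1 < d2 )) && { [ "$op" = '<' ]; return; }
        done
        [ "$op" = '=' ]                           # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }          # so: lt 1.15 2 succeeds (1 < 2 decides it)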
00:07:13.100    16:17:42 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:13.100    16:17:42 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:13.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.100  		--rc genhtml_branch_coverage=1
00:07:13.100  		--rc genhtml_function_coverage=1
00:07:13.100  		--rc genhtml_legend=1
00:07:13.100  		--rc geninfo_all_blocks=1
00:07:13.100  		--rc geninfo_unexecuted_blocks=1
00:07:13.100  		
00:07:13.100  		'
00:07:13.100    16:17:42 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:13.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.100  		--rc genhtml_branch_coverage=1
00:07:13.100  		--rc genhtml_function_coverage=1
00:07:13.100  		--rc genhtml_legend=1
00:07:13.100  		--rc geninfo_all_blocks=1
00:07:13.100  		--rc geninfo_unexecuted_blocks=1
00:07:13.100  		
00:07:13.100  		'
00:07:13.100    16:17:42 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:13.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.100  		--rc genhtml_branch_coverage=1
00:07:13.100  		--rc genhtml_function_coverage=1
00:07:13.100  		--rc genhtml_legend=1
00:07:13.100  		--rc geninfo_all_blocks=1
00:07:13.100  		--rc geninfo_unexecuted_blocks=1
00:07:13.100  		
00:07:13.100  		'
00:07:13.100    16:17:42 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:13.100  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.100  		--rc genhtml_branch_coverage=1
00:07:13.100  		--rc genhtml_function_coverage=1
00:07:13.100  		--rc genhtml_legend=1
00:07:13.100  		--rc geninfo_all_blocks=1
00:07:13.100  		--rc geninfo_unexecuted_blocks=1
00:07:13.100  		
00:07:13.100  		'
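The LCOV/LCOV_OPTS values exported above are only consumed later if coverage collection is enabled for the run. A hypothetical capture call (paths illustrative, not from this log) would expand them like:

    $LCOV --capture --directory . --output-file coverage.info   # $LCOV already carries the --rc options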
00:07:13.100   16:17:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:13.100    16:17:42 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:13.100   16:17:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:13.100   16:17:42 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:13.100   16:17:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.100   16:17:42 event -- common/autotest_common.sh@10 -- # set +x
00:07:13.100  ************************************
00:07:13.100  START TEST event_perf
00:07:13.100  ************************************
00:07:13.100   16:17:42 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:13.100  Running I/O for 1 second...[2024-12-09 16:17:42.151984] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:13.100  [2024-12-09 16:17:42.152195] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60146 ]
00:07:13.359  [2024-12-09 16:17:42.336172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:13.359  [2024-12-09 16:17:42.460001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:13.359  [2024-12-09 16:17:42.460092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:13.359  [2024-12-09 16:17:42.460234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.359  [2024-12-09 16:17:42.460265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:14.737  Running I/O for 1 second...
00:07:14.737  lcore  0:   211671
00:07:14.737  lcore  1:   211673
00:07:14.737  lcore  2:   211672
00:07:14.737  lcore  3:   211672
00:07:14.737  done.
00:07:14.737  
00:07:14.737  real	0m1.606s
00:07:14.737  user	0m4.336s
00:07:14.737  sys	0m0.149s
00:07:14.738  ************************************
00:07:14.738  END TEST event_perf
00:07:14.738  ************************************
00:07:14.738   16:17:43 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:14.738   16:17:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:14.738   16:17:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:14.738   16:17:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:14.738   16:17:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:14.738   16:17:43 event -- common/autotest_common.sh@10 -- # set +x
00:07:14.738  ************************************
00:07:14.738  START TEST event_reactor
00:07:14.738  ************************************
00:07:14.738   16:17:43 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:14.738  [2024-12-09 16:17:43.834736] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:14.738  [2024-12-09 16:17:43.834846] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60185 ]
00:07:14.997  [2024-12-09 16:17:44.014658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.997  [2024-12-09 16:17:44.130997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.379  test_start
00:07:16.379  oneshot
00:07:16.379  tick 100
00:07:16.379  tick 100
00:07:16.379  tick 250
00:07:16.379  tick 100
00:07:16.379  tick 100
00:07:16.379  tick 250
00:07:16.379  tick 100
00:07:16.379  tick 500
00:07:16.379  tick 100
00:07:16.379  tick 100
00:07:16.379  tick 250
00:07:16.379  tick 100
00:07:16.379  tick 100
00:07:16.379  test_end
00:07:16.379  
00:07:16.379  real	0m1.573s
00:07:16.379  user	0m1.340s
00:07:16.379  sys	0m0.125s
00:07:16.379  ************************************
00:07:16.379  END TEST event_reactor
00:07:16.379  ************************************
00:07:16.379   16:17:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:16.379   16:17:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:16.379   16:17:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:16.379   16:17:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:16.379   16:17:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:16.379   16:17:45 event -- common/autotest_common.sh@10 -- # set +x
00:07:16.379  ************************************
00:07:16.379  START TEST event_reactor_perf
00:07:16.379  ************************************
00:07:16.379   16:17:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:16.379  [2024-12-09 16:17:45.487425] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:16.379  [2024-12-09 16:17:45.487655] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60222 ]
00:07:16.638  [2024-12-09 16:17:45.668565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.638  [2024-12-09 16:17:45.786391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.018  test_start
00:07:18.018  test_end
00:07:18.018  Performance:   385576 events per second
00:07:18.018  
00:07:18.018  real	0m1.577s
00:07:18.018  user	0m1.360s
00:07:18.018  sys	0m0.106s
00:07:18.018   16:17:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:18.018   16:17:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:18.018  ************************************
00:07:18.018  END TEST event_reactor_perf
00:07:18.018  ************************************
00:07:18.018    16:17:47 event -- event/event.sh@49 -- # uname -s
00:07:18.018   16:17:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:18.018   16:17:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:18.018   16:17:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:18.018   16:17:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:18.018   16:17:47 event -- common/autotest_common.sh@10 -- # set +x
00:07:18.018  ************************************
00:07:18.018  START TEST event_scheduler
00:07:18.018  ************************************
00:07:18.018   16:17:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:18.278  * Looking for test storage...
00:07:18.278  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:18.278     16:17:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:07:18.278     16:17:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:18.278     16:17:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:18.278    16:17:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:18.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.278  		--rc genhtml_branch_coverage=1
00:07:18.278  		--rc genhtml_function_coverage=1
00:07:18.278  		--rc genhtml_legend=1
00:07:18.278  		--rc geninfo_all_blocks=1
00:07:18.278  		--rc geninfo_unexecuted_blocks=1
00:07:18.278  		
00:07:18.278  		'
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:18.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.278  		--rc genhtml_branch_coverage=1
00:07:18.278  		--rc genhtml_function_coverage=1
00:07:18.278  		--rc genhtml_legend=1
00:07:18.278  		--rc geninfo_all_blocks=1
00:07:18.278  		--rc geninfo_unexecuted_blocks=1
00:07:18.278  		
00:07:18.278  		'
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:18.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.278  		--rc genhtml_branch_coverage=1
00:07:18.278  		--rc genhtml_function_coverage=1
00:07:18.278  		--rc genhtml_legend=1
00:07:18.278  		--rc geninfo_all_blocks=1
00:07:18.278  		--rc geninfo_unexecuted_blocks=1
00:07:18.278  		
00:07:18.278  		'
00:07:18.278    16:17:47 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:18.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.278  		--rc genhtml_branch_coverage=1
00:07:18.279  		--rc genhtml_function_coverage=1
00:07:18.279  		--rc genhtml_legend=1
00:07:18.279  		--rc geninfo_all_blocks=1
00:07:18.279  		--rc geninfo_unexecuted_blocks=1
00:07:18.279  		
00:07:18.279  		'
00:07:18.279   16:17:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:18.279   16:17:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60298
00:07:18.279   16:17:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:18.279   16:17:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:18.279   16:17:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60298
00:07:18.279   16:17:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60298 ']'
00:07:18.279   16:17:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.279   16:17:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:18.279   16:17:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:18.279  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.279   16:17:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:18.279   16:17:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:18.279  [2024-12-09 16:17:47.417439] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:18.279  [2024-12-09 16:17:47.417569] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60298 ]
00:07:18.538  [2024-12-09 16:17:47.588302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:18.538  [2024-12-09 16:17:47.705863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.538  [2024-12-09 16:17:47.706070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.538  [2024-12-09 16:17:47.706252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:18.538  [2024-12-09 16:17:47.706295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
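The echo plus retry-counter trace above is the waitforlisten helper. A sketch of its shape; note the readiness probe is an assumption here (the real helper goes through scripts/rpc.py rather than a bare socket test):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # bail out if the app died during startup
            [ -S "$rpc_addr" ] && return 0           # socket present: treat the app as ready
            sleep 0.1
        done
        return 1
    }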
00:07:19.107   16:17:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:19.107  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:19.107  POWER: Cannot set governor of lcore 0 to userspace
00:07:19.107  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:19.107  POWER: Cannot set governor of lcore 0 to performance
00:07:19.107  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:19.107  POWER: Cannot set governor of lcore 0 to userspace
00:07:19.107  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:19.107  POWER: Cannot set governor of lcore 0 to userspace
00:07:19.107  GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:07:19.107  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:19.107  POWER: Unable to set Power Management Environment for lcore 0
00:07:19.107  [2024-12-09 16:17:48.259044] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:07:19.107  [2024-12-09 16:17:48.259068] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:07:19.107  [2024-12-09 16:17:48.259080] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:07:19.107  [2024-12-09 16:17:48.259102] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:19.107  [2024-12-09 16:17:48.259113] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:19.107  [2024-12-09 16:17:48.259126] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
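The POWER errors above mean this VM exposes no cpufreq sysfs nodes, so the dpdk governor falls back and the dynamic scheduler runs without frequency control. An illustrative check for the nodes the power library tried to open (the commented line is what "set governor to userspace" amounts to on a host that has them):

    for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
        [ -f "$gov" ] || { echo "no cpufreq support: $gov"; continue; }
        printf '%s: %s\n' "$gov" "$(cat "$gov")"     # e.g. powersave / performance
        # echo userspace | sudo tee "$gov"
    done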
00:07:19.107   16:17:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.107   16:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  [2024-12-09 16:17:48.588717] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:19.677   16:17:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:19.677   16:17:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:19.677   16:17:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  ************************************
00:07:19.677  START TEST scheduler_create_thread
00:07:19.677  ************************************
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  2
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  3
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  4
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  5
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  6
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  7
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  8
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  9
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677  10
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.677   16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.677    16:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:20.616    16:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:20.616   16:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:20.616   16:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:20.616   16:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:20.616   16:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
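The thread set built above, restated as plain RPC calls (rpc_cmd wraps scripts/rpc.py against the app's RPC socket; names, masks and active percentages are copied from the trace):

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # 100% busy, pinned to core 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # idle, pinned to core 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30       # unpinned, 30% busy
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50             # raised to 50% at runtime
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                    # threads can be deleted live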
00:07:21.997  ************************************
00:07:21.997  END TEST scheduler_create_thread
00:07:21.997  ************************************
00:07:21.997   16:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.997  
00:07:21.997  real	0m2.136s
00:07:21.997  user	0m0.021s
00:07:21.997  sys	0m0.011s
00:07:21.997   16:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.997   16:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:21.997   16:17:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:21.997   16:17:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60298
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60298 ']'
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60298
00:07:21.997    16:17:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:21.997    16:17:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60298
00:07:21.997  killing process with pid 60298
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60298'
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60298
00:07:21.997   16:17:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60298
00:07:22.256  [2024-12-09 16:17:51.221430] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:23.637  
00:07:23.637  real	0m5.284s
00:07:23.637  user	0m8.741s
00:07:23.637  sys	0m0.549s
00:07:23.637  ************************************
00:07:23.637  END TEST event_scheduler
00:07:23.637  ************************************
00:07:23.637   16:17:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.637   16:17:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:23.637   16:17:52 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:23.637   16:17:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:23.637   16:17:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.637   16:17:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.637   16:17:52 event -- common/autotest_common.sh@10 -- # set +x
00:07:23.637  ************************************
00:07:23.637  START TEST app_repeat
00:07:23.637  ************************************
00:07:23.637   16:17:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60404
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:23.637  Process app_repeat pid: 60404
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60404'
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:23.637  spdk_app_start Round 0
00:07:23.637   16:17:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:23.638   16:17:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60404 /var/tmp/spdk-nbd.sock
00:07:23.638   16:17:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60404 ']'
00:07:23.638   16:17:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:23.638   16:17:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:23.638  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:23.638   16:17:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:23.638   16:17:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:23.638   16:17:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:23.638  [2024-12-09 16:17:52.535576] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:23.638  [2024-12-09 16:17:52.535711] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60404 ]
00:07:23.638  [2024-12-09 16:17:52.718639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:23.897  [2024-12-09 16:17:52.831043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.897  [2024-12-09 16:17:52.831073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:24.466   16:17:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:24.466   16:17:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:24.466   16:17:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:24.725  Malloc0
00:07:24.725   16:17:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:24.986  Malloc1
00:07:24.986   16:17:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:24.986   16:17:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:25.246  /dev/nbd0
00:07:25.246    16:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:25.246   16:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:25.246  1+0 records in
00:07:25.246  1+0 records out
00:07:25.246  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536864 s, 7.6 MB/s
00:07:25.246    16:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:25.246   16:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:25.246   16:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:25.246   16:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:25.246   16:17:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:25.506  /dev/nbd1
00:07:25.506    16:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:25.506   16:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:25.506  1+0 records in
00:07:25.506  1+0 records out
00:07:25.506  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375642 s, 10.9 MB/s
00:07:25.506    16:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:25.506   16:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:25.506   16:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:25.506   16:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
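Both nbd devices are now attached; the waitfornbd traces above follow a fixed pattern. A sketch reconstructed from those lines ($testdir stands for the test's scratch directory, /home/vagrant/spdk_repo/spdk/test/event in this run):

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break     # device node registered?
            sleep 0.1
        done
        for (( i = 1; i <= 20; i++ )); do
            if dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$testdir/nbdtest")
                rm -f "$testdir/nbdtest"
                [ "$size" != 0 ] && return 0                     # a real 4 KiB came back: device is live
            fi
            sleep 0.1
        done
        return 1
    }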
00:07:25.506    16:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:25.506    16:17:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:25.506     16:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:25.765    16:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:25.765    {
00:07:25.765      "nbd_device": "/dev/nbd0",
00:07:25.765      "bdev_name": "Malloc0"
00:07:25.765    },
00:07:25.765    {
00:07:25.765      "nbd_device": "/dev/nbd1",
00:07:25.765      "bdev_name": "Malloc1"
00:07:25.765    }
00:07:25.765  ]'
00:07:25.765     16:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:25.765    {
00:07:25.765      "nbd_device": "/dev/nbd0",
00:07:25.765      "bdev_name": "Malloc0"
00:07:25.765    },
00:07:25.765    {
00:07:25.765      "nbd_device": "/dev/nbd1",
00:07:25.765      "bdev_name": "Malloc1"
00:07:25.765    }
00:07:25.765  ]'
00:07:25.765     16:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:25.765    16:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:25.765  /dev/nbd1'
00:07:25.765     16:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:25.765  /dev/nbd1'
00:07:25.765     16:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:25.765    16:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:25.765    16:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:25.765   16:17:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:25.765  256+0 records in
00:07:25.765  256+0 records out
00:07:25.766  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567391 s, 185 MB/s
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:25.766  256+0 records in
00:07:25.766  256+0 records out
00:07:25.766  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299207 s, 35.0 MB/s
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:25.766  256+0 records in
00:07:25.766  256+0 records out
00:07:25.766  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311805 s, 33.6 MB/s
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
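The write and verify passes just traced boil down to one dd/cmp helper. A sketch reconstructed from nbd_common.sh@70-85 in the trace ($testdir again denotes the scratch directory):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=$testdir/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256       # 1 MiB of random data
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"     # byte-for-byte against the source file
            done
            rm "$tmp_file"                        # removed only after a clean verify pass
        fi
    }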
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:25.766   16:17:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:26.031    16:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:26.031   16:17:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:26.291    16:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:26.291   16:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:26.291    16:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:26.291    16:17:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:26.292     16:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:26.551    16:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:26.551     16:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:26.551     16:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:26.551    16:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:26.551     16:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:26.551     16:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:26.551     16:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:26.551    16:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:26.551    16:17:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:26.551   16:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:26.551   16:17:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:26.551   16:17:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
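The count check above (2 while attached, 0 after nbd_stop_disk) is nbd_get_count. A sketch from the traced lines; rpc.py stands for the repo's scripts/rpc.py:

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)   # JSON array as shown in the log
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        echo "$nbd_disks_name" | grep -c /dev/nbd || true         # grep -c exits 1 on zero matches
    }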
00:07:26.551   16:17:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:27.118   16:17:56 event.app_repeat -- event/event.sh@35 -- # sleep 3
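Shape of the app_repeat driver implied by event.sh@23-35 in the trace; this is a sketch, and whether the app relaunches itself on SIGTERM or is restarted by the script is not visible in the log, hence the plain sleep between rounds:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... create Malloc0/Malloc1, attach them as /dev/nbd0 and /dev/nbd1, write + verify ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3          # give the app time to cycle before the next round
    done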
00:07:28.056  [2024-12-09 16:17:57.188567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:28.315  [2024-12-09 16:17:57.298161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.315  [2024-12-09 16:17:57.298161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:28.574  [2024-12-09 16:17:57.493203] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:28.574  [2024-12-09 16:17:57.493293] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:30.052   16:17:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:30.052  spdk_app_start Round 1
00:07:30.052   16:17:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:30.052   16:17:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60404 /var/tmp/spdk-nbd.sock
00:07:30.052   16:17:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60404 ']'
00:07:30.052   16:17:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:30.052   16:17:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:30.052  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:30.052   16:17:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:30.052   16:17:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:30.052   16:17:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:30.311   16:17:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:30.311   16:17:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:30.311   16:17:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:30.311  Malloc0
00:07:30.570   16:17:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:30.829  Malloc1
00:07:30.829   16:17:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:30.829   16:17:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:30.829  /dev/nbd0
00:07:30.829    16:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:30.829   16:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:30.829   16:18:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:31.194  1+0 records in
00:07:31.194  1+0 records out
00:07:31.194  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214329 s, 19.1 MB/s
00:07:31.194    16:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:31.194   16:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:31.194   16:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:31.194   16:18:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:31.194  /dev/nbd1
00:07:31.194    16:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:31.194   16:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:31.194   16:18:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:31.453  1+0 records in
00:07:31.453  1+0 records out
00:07:31.453  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384301 s, 10.7 MB/s
00:07:31.453    16:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.453   16:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:31.453   16:18:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:31.453   16:18:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:31.453   16:18:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:31.453   16:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:31.453   16:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
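
nbd_start_disks attaches each bdev to a kernel /dev/nbdX node through the nbd_start_disk RPC, and the waitfornbd helper traced above then polls /proc/partitions and performs one 4 KiB O_DIRECT read to prove the device really serves data. A trimmed sketch of that wait loop, with the poll interval and temp path chosen here for illustration rather than copied from the helper:

  # Poll until the kernel lists the device, then verify one direct read.
  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [[ $size != 0 ]]   # an empty read means the device is not actually up
  }
  waitfornbd nbd0
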
00:07:31.453    16:18:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:31.453    16:18:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:31.453     16:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:31.453    16:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:31.453    {
00:07:31.453      "nbd_device": "/dev/nbd0",
00:07:31.453      "bdev_name": "Malloc0"
00:07:31.453    },
00:07:31.453    {
00:07:31.453      "nbd_device": "/dev/nbd1",
00:07:31.453      "bdev_name": "Malloc1"
00:07:31.453    }
00:07:31.453  ]'
00:07:31.453     16:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:31.453    {
00:07:31.453      "nbd_device": "/dev/nbd0",
00:07:31.453      "bdev_name": "Malloc0"
00:07:31.454    },
00:07:31.454    {
00:07:31.454      "nbd_device": "/dev/nbd1",
00:07:31.454      "bdev_name": "Malloc1"
00:07:31.454    }
00:07:31.454  ]'
00:07:31.454     16:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:31.454    16:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:31.454  /dev/nbd1'
00:07:31.454     16:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:31.454  /dev/nbd1'
00:07:31.454     16:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:31.454    16:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:31.454    16:18:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
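
nbd_get_count derives the number of exported devices from the nbd_get_disks JSON: jq pulls out each .nbd_device path and grep -c counts the matching lines, and the result (2 here) is checked against the expected device count. Roughly:

  # Count the nbd devices the app currently exports.
  nbd_get_count() {
      local rpc_server=$1
      scripts/rpc.py -s "$rpc_server" nbd_get_disks \
          | jq -r '.[] | .nbd_device' \
          | grep -c /dev/nbd || true   # grep -c exits 1 when the count is 0
  }
  count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
  [[ $count -eq 2 ]]

The bare `true` that shows up later in the trace, once all disks are stopped and the count is 0, appears to be exactly this fallback firing.
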
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:31.454  256+0 records in
00:07:31.454  256+0 records out
00:07:31.454  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105088 s, 99.8 MB/s
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:31.454   16:18:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:31.713  256+0 records in
00:07:31.713  256+0 records out
00:07:31.713  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282982 s, 37.1 MB/s
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:31.713  256+0 records in
00:07:31.713  256+0 records out
00:07:31.713  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317973 s, 33.0 MB/s
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
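
The write/verify pair above is the actual data-path test: fill a 1 MiB temp file from /dev/urandom, dd it through each nbd device with O_DIRECT, then cmp the device contents byte-for-byte against the source file. Condensed:

  # Push identical random data through both devices, then read-compare it.
  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of test data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"   # fails on the first mismatched byte
  done
  rm "$tmp"
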
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:31.713   16:18:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:31.973    16:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:31.973   16:18:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:31.973    16:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:31.973   16:18:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:31.973    16:18:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:31.973    16:18:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:31.973     16:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:32.232    16:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:32.232     16:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:32.232     16:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:32.232    16:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:32.232     16:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:32.232     16:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:32.232     16:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:32.232    16:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:32.232    16:18:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:32.232   16:18:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:32.232   16:18:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:32.232   16:18:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
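
Teardown mirrors setup: nbd_stop_disk detaches each device, waitfornbd_exit polls /proc/partitions until the node disappears, and a final nbd_get_count has to report 0 before the app is torn down. A sketch of the stop half, with an illustrative poll interval:

  # Detach both devices and wait for the kernel to drop each node.
  for dev in /dev/nbd0 /dev/nbd1; do
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1
      done
  done
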
00:07:32.232   16:18:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:32.798   16:18:01 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:34.176  [2024-12-09 16:18:02.973473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:34.176  [2024-12-09 16:18:03.087825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.176  [2024-12-09 16:18:03.087847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:34.176  [2024-12-09 16:18:03.284576] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:34.176  [2024-12-09 16:18:03.284648] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:36.086  spdk_app_start Round 2
00:07:36.086  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:36.086   16:18:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:36.086   16:18:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:36.086   16:18:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60404 /var/tmp/spdk-nbd.sock
00:07:36.086   16:18:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60404 ']'
00:07:36.086   16:18:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:36.086   16:18:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.086   16:18:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:36.086   16:18:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.086   16:18:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:36.086   16:18:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.086   16:18:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:36.086   16:18:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:36.347  Malloc0
00:07:36.347   16:18:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:36.606  Malloc1
00:07:36.606   16:18:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:36.606   16:18:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:36.865  /dev/nbd0
00:07:36.865    16:18:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:36.865   16:18:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:36.866  1+0 records in
00:07:36.866  1+0 records out
00:07:36.866  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351271 s, 11.7 MB/s
00:07:36.866    16:18:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:36.866   16:18:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:36.866   16:18:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:36.866   16:18:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:36.866   16:18:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:37.125  /dev/nbd1
00:07:37.125    16:18:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:37.125   16:18:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:37.125  1+0 records in
00:07:37.125  1+0 records out
00:07:37.125  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307493 s, 13.3 MB/s
00:07:37.125    16:18:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:37.125   16:18:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:37.125   16:18:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:37.125   16:18:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:37.125    16:18:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:37.125    16:18:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:37.125     16:18:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:37.385    16:18:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:37.385    {
00:07:37.385      "nbd_device": "/dev/nbd0",
00:07:37.385      "bdev_name": "Malloc0"
00:07:37.385    },
00:07:37.385    {
00:07:37.385      "nbd_device": "/dev/nbd1",
00:07:37.385      "bdev_name": "Malloc1"
00:07:37.385    }
00:07:37.385  ]'
00:07:37.385     16:18:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:37.385    {
00:07:37.385      "nbd_device": "/dev/nbd0",
00:07:37.385      "bdev_name": "Malloc0"
00:07:37.385    },
00:07:37.385    {
00:07:37.385      "nbd_device": "/dev/nbd1",
00:07:37.385      "bdev_name": "Malloc1"
00:07:37.385    }
00:07:37.385  ]'
00:07:37.385     16:18:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:37.385    16:18:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:37.385  /dev/nbd1'
00:07:37.385     16:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:37.385  /dev/nbd1'
00:07:37.385     16:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:37.385    16:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:37.385    16:18:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:37.385  256+0 records in
00:07:37.385  256+0 records out
00:07:37.385  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128934 s, 81.3 MB/s
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:37.385  256+0 records in
00:07:37.385  256+0 records out
00:07:37.385  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284895 s, 36.8 MB/s
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:37.385   16:18:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:37.385  256+0 records in
00:07:37.385  256+0 records out
00:07:37.385  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305664 s, 34.3 MB/s
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:37.386   16:18:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:37.645    16:18:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:37.645   16:18:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:37.905    16:18:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:37.905   16:18:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:37.905    16:18:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:37.905    16:18:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:37.905     16:18:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:38.165    16:18:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:38.165     16:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:38.165     16:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:38.165    16:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:38.165     16:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:38.165     16:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:38.165     16:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:38.165    16:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:38.165    16:18:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:38.165   16:18:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:38.165   16:18:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:38.165   16:18:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:38.165   16:18:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:38.734   16:18:07 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:39.671  [2024-12-09 16:18:08.759162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:39.930  [2024-12-09 16:18:08.868429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.930  [2024-12-09 16:18:08.868428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:39.930  [2024-12-09 16:18:09.060926] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:39.930  [2024-12-09 16:18:09.060998] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:41.837  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:41.837   16:18:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60404 /var/tmp/spdk-nbd.sock
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60404 ']'
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:41.837   16:18:10 event.app_repeat -- event/event.sh@39 -- # killprocess 60404
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60404 ']'
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60404
00:07:41.837    16:18:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:41.837    16:18:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60404
00:07:41.837  killing process with pid 60404
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60404'
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60404
00:07:41.837   16:18:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60404
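
killprocess is the standard teardown helper traced above: kill -0 confirms the pid is still alive, ps -o comm= checks what the process actually is (an SPDK app shows up as reactor_0), processes running under sudo get separate handling, and finally the pid is SIGTERMed and reaped with wait. A trimmed version that leaves out the sudo branch:

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 0       # already gone, nothing to do
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($name)"
      kill "$pid"                      # SIGTERM by default
      wait "$pid"                      # reap; valid because this shell spawned it
  }
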
00:07:42.775  spdk_app_start is called in Round 0.
00:07:42.775  Shutdown signal received, stop current app iteration
00:07:42.775  Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 reinitialization...
00:07:42.775  spdk_app_start is called in Round 1.
00:07:42.775  Shutdown signal received, stop current app iteration
00:07:42.775  Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 reinitialization...
00:07:42.775  spdk_app_start is called in Round 2.
00:07:42.775  Shutdown signal received, stop current app iteration
00:07:42.775  Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 reinitialization...
00:07:42.775  spdk_app_start is called in Round 3.
00:07:42.775  Shutdown signal received, stop current app iteration
00:07:43.035   16:18:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:43.035   16:18:11 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:43.035  
00:07:43.035  real	0m19.509s
00:07:43.035  user	0m41.504s
00:07:43.035  sys	0m3.195s
00:07:43.035   16:18:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.035   16:18:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:43.035  ************************************
00:07:43.035  END TEST app_repeat
00:07:43.035  ************************************
00:07:43.035   16:18:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:43.035   16:18:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:43.035   16:18:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:43.035   16:18:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.035   16:18:12 event -- common/autotest_common.sh@10 -- # set +x
00:07:43.035  ************************************
00:07:43.035  START TEST cpu_locks
00:07:43.035  ************************************
00:07:43.035   16:18:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:43.035  * Looking for test storage...
00:07:43.035  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:43.035    16:18:12 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:43.035     16:18:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:07:43.035     16:18:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:43.294    16:18:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:43.294    16:18:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:43.295     16:18:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:43.295    16:18:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:43.295    16:18:12 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:43.295    16:18:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:43.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.295  		--rc genhtml_branch_coverage=1
00:07:43.295  		--rc genhtml_function_coverage=1
00:07:43.295  		--rc genhtml_legend=1
00:07:43.295  		--rc geninfo_all_blocks=1
00:07:43.295  		--rc geninfo_unexecuted_blocks=1
00:07:43.295  		
00:07:43.295  		'
00:07:43.295    16:18:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:43.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.295  		--rc genhtml_branch_coverage=1
00:07:43.295  		--rc genhtml_function_coverage=1
00:07:43.295  		--rc genhtml_legend=1
00:07:43.295  		--rc geninfo_all_blocks=1
00:07:43.295  		--rc geninfo_unexecuted_blocks=1
00:07:43.295  		
00:07:43.295  		'
00:07:43.295    16:18:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:43.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.295  		--rc genhtml_branch_coverage=1
00:07:43.295  		--rc genhtml_function_coverage=1
00:07:43.295  		--rc genhtml_legend=1
00:07:43.295  		--rc geninfo_all_blocks=1
00:07:43.295  		--rc geninfo_unexecuted_blocks=1
00:07:43.295  		
00:07:43.295  		'
00:07:43.295    16:18:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:43.295  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.295  		--rc genhtml_branch_coverage=1
00:07:43.295  		--rc genhtml_function_coverage=1
00:07:43.295  		--rc genhtml_legend=1
00:07:43.295  		--rc geninfo_all_blocks=1
00:07:43.295  		--rc geninfo_unexecuted_blocks=1
00:07:43.295  		
00:07:43.295  		'
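
The long scripts/common.sh run above is a pure-bash version comparison: `lt 1.15 2` splits both version strings on `.`, `-`, or `:`, pads the shorter field list, and compares field by field numerically, returning at the first unequal field. Here 1 < 2 holds, so lcov is treated as pre-2.0 and the `--rc lcov_branch_coverage=1` flag spelling is selected. A simplified variant handling only strict < and > (the real cmp_versions also supports <=, >=, == and validates each field as a decimal):

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v max
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing fields with 0
          ((a > b)) && { [[ $op == '>' ]]; return; }
          ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      return 1   # equal, so neither strictly < nor strictly >
  }
  cmp_versions 1.15 '<' 2 && echo "1.15 < 2"
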
00:07:43.295   16:18:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:43.295   16:18:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:43.295   16:18:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:43.295   16:18:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:43.295   16:18:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:43.295   16:18:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.295   16:18:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:43.295  ************************************
00:07:43.295  START TEST default_locks
00:07:43.295  ************************************
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60851
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60851
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60851 ']'
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:43.295  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:43.295   16:18:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:43.295  [2024-12-09 16:18:12.394617] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:43.295  [2024-12-09 16:18:12.394738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60851 ]
00:07:43.554  [2024-12-09 16:18:12.563340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.554  [2024-12-09 16:18:12.678206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.520   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:44.520   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:07:44.520   16:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60851
00:07:44.520   16:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60851
00:07:44.520   16:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
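
locks_exist is the core assertion of this test: an SPDK app started with -m 0x1 takes a file lock for each core it claims (lock files named spdk_cpu_lock_*, kept under /var/tmp by default, though the trace only shows the name pattern), and lslocks -p lists the locks a pid holds. So the check reduces to:

  # Succeeds only if the pid holds at least one SPDK per-core file lock.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 60851 && echo "core lock held"
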
00:07:44.780   16:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60851
00:07:44.780   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60851 ']'
00:07:44.780   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60851
00:07:44.780    16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:07:44.780   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:45.040    16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60851
00:07:45.040   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:45.040  killing process with pid 60851
00:07:45.040   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:45.040   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60851'
00:07:45.040   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60851
00:07:45.040   16:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60851
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60851
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60851
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:47.577    16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60851
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60851 ']'
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:47.577  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:47.577  ERROR: process (pid: 60851) is no longer running
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:47.577  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60851) - No such process
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
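
The block above is the negative half of the test: after killprocess, `NOT waitforlisten 60851` has to fail-to-succeed, and the `es=1` / `(( es > 128 ))` lines show the NOT helper capturing the exit status and checking it was a plain error rather than a signal death. A bare-bones sketch of the inversion idea (the real helper, as the trace shows, also validates the wrapped command and distinguishes signal exits):

  # Succeed only when the wrapped command fails; for expected-error asserts.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
  NOT kill -0 60851   # passes once the pid no longer exists
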
00:07:47.577  
00:07:47.577  real	0m4.105s
00:07:47.577  user	0m4.063s
00:07:47.577  sys	0m0.652s
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.577   16:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:47.577  ************************************
00:07:47.577  END TEST default_locks
00:07:47.577  ************************************
00:07:47.577   16:18:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:47.577   16:18:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:47.577   16:18:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.577   16:18:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:47.577  ************************************
00:07:47.577  START TEST default_locks_via_rpc
00:07:47.577  ************************************
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60928
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60928
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60928 ']'
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.577   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:47.577  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.578   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.578   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:47.578   16:18:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:47.578  [2024-12-09 16:18:16.563499] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:47.578  [2024-12-09 16:18:16.563630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60928 ]
00:07:47.578  [2024-12-09 16:18:16.745944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.837  [2024-12-09 16:18:16.860875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60928
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60928
00:07:48.775   16:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
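
Where default_locks relied on the locks taken at process start, this test toggles them at runtime: framework_disable_cpumask_locks releases the per-core locks over RPC, framework_enable_cpumask_locks takes them again, and the same lslocks probe must then find spdk_cpu_lock once more. In shell terms, from the repo root (60928 is the target pid in this run):

  SOCK=/var/tmp/spdk.sock
  scripts/rpc.py -s "$SOCK" framework_disable_cpumask_locks   # drop the core locks
  scripts/rpc.py -s "$SOCK" framework_enable_cpumask_locks    # take them back
  lslocks -p 60928 | grep -q spdk_cpu_lock                    # must hold again
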
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60928
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60928 ']'
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60928
00:07:49.343    16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:49.343    16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60928
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:49.343  killing process with pid 60928
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60928'
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60928
00:07:49.343   16:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60928
00:07:51.879  
00:07:51.879  real	0m4.237s
00:07:51.879  user	0m4.199s
00:07:51.879  sys	0m0.695s
00:07:51.879   16:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.879   16:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.879  ************************************
00:07:51.879  END TEST default_locks_via_rpc
00:07:51.879  ************************************
00:07:51.879   16:18:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:51.879   16:18:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.879   16:18:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.879   16:18:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:51.879  ************************************
00:07:51.879  START TEST non_locking_app_on_locked_coremask
00:07:51.879  ************************************
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61002
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61002 /var/tmp/spdk.sock
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61002 ']'
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:51.879  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.879   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.880   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:51.880   16:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:51.880  [2024-12-09 16:18:20.873830] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:51.880  [2024-12-09 16:18:20.873977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61002 ]
00:07:51.880  [2024-12-09 16:18:21.052037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:52.139  [2024-12-09 16:18:21.163209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61018
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61018 /var/tmp/spdk2.sock
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61018 ']'
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:53.077  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:53.077   16:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:53.077  [2024-12-09 16:18:22.119288] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:07:53.077  [2024-12-09 16:18:22.119406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61018 ]
00:07:53.336  [2024-12-09 16:18:22.300423] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:53.336  [2024-12-09 16:18:22.300478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.595  [2024-12-09 16:18:22.539779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.502   16:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:55.502   16:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:55.502   16:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61002
00:07:55.502   16:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61002
00:07:55.502   16:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61002
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61002 ']'
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61002
00:07:56.439    16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:56.439    16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61002
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:56.439  killing process with pid 61002
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61002'
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61002
00:07:56.439   16:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61002
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61018
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61018 ']'
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61018
00:08:01.717    16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:01.717    16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61018
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:01.717  killing process with pid 61018
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61018'
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61018
00:08:01.717   16:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61018
00:08:03.624  
00:08:03.624  real	0m11.903s
00:08:03.624  user	0m12.154s
00:08:03.624  sys	0m1.456s
00:08:03.624   16:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:03.624   16:18:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.624  ************************************
00:08:03.624  END TEST non_locking_app_on_locked_coremask
00:08:03.624  ************************************
00:08:03.624   16:18:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:08:03.624   16:18:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:03.624   16:18:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.624   16:18:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:03.624  ************************************
00:08:03.624  START TEST locking_app_on_unlocked_coremask
00:08:03.624  ************************************
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61177
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61177 /var/tmp/spdk.sock
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61177 ']'
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.624  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.624   16:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.884  [2024-12-09 16:18:32.855495] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:03.884  [2024-12-09 16:18:32.855623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61177 ]
00:08:03.884  [2024-12-09 16:18:33.038175] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:03.884  [2024-12-09 16:18:33.038226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:04.143  [2024-12-09 16:18:33.154616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61193
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61193 /var/tmp/spdk2.sock
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61193 ']'
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:05.079  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:05.079   16:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:05.079  [2024-12-09 16:18:34.129054] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:05.079  [2024-12-09 16:18:34.129177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ]
00:08:05.338  [2024-12-09 16:18:34.312719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:05.596  [2024-12-09 16:18:34.539590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.500   16:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:07.500   16:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:07.500   16:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61193
00:08:07.500   16:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61193
00:08:07.500   16:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61177
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61177 ']'
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61177
00:08:08.438    16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:08.438    16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61177
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61177'
00:08:08.438  killing process with pid 61177
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61177
00:08:08.438   16:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61177
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61193
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61193 ']'
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61193
00:08:13.711    16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:13.711    16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61193
00:08:13.711  killing process with pid 61193
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61193'
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61193
00:08:13.711   16:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61193
00:08:15.671  
00:08:15.671  real	0m11.937s
00:08:15.671  user	0m12.247s
00:08:15.671  sys	0m1.352s
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:15.672  ************************************
00:08:15.672  END TEST locking_app_on_unlocked_coremask
00:08:15.672  ************************************
00:08:15.672   16:18:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:08:15.672   16:18:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:15.672   16:18:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:15.672   16:18:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:15.672  ************************************
00:08:15.672  START TEST locking_app_on_locked_coremask
00:08:15.672  ************************************
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61341
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61341 /var/tmp/spdk.sock
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61341 ']'
00:08:15.672  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:15.672   16:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:15.931  [2024-12-09 16:18:44.909269] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:15.931  [2024-12-09 16:18:44.909452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61341 ]
00:08:16.190  [2024-12-09 16:18:45.117753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.190  [2024-12-09 16:18:45.236061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61368
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61368 /var/tmp/spdk2.sock
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61368 /var/tmp/spdk2.sock
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:17.128    16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61368 /var/tmp/spdk2.sock
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61368 ']'
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:17.128  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:17.128   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:17.128  [2024-12-09 16:18:46.238300] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:17.128  [2024-12-09 16:18:46.238430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61368 ]
00:08:17.388  [2024-12-09 16:18:46.421784] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61341 has claimed it.
00:08:17.388  [2024-12-09 16:18:46.421868] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:17.957  ERROR: process (pid: 61368) is no longer running
00:08:17.957  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61368) - No such process
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61341
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61341
00:08:17.957   16:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61341
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61341 ']'
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61341
00:08:18.217    16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:18.217    16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61341
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:18.217  killing process with pid 61341
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61341'
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61341
00:08:18.217   16:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61341
00:08:20.753  ************************************
00:08:20.753  END TEST locking_app_on_locked_coremask
00:08:20.753  ************************************
00:08:20.753  
00:08:20.753  real	0m4.988s
00:08:20.753  user	0m5.174s
00:08:20.753  sys	0m0.877s
00:08:20.753   16:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:20.753   16:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:20.753   16:18:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:20.753   16:18:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:20.753   16:18:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:20.753   16:18:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:20.754  ************************************
00:08:20.754  START TEST locking_overlapped_coremask
00:08:20.754  ************************************
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61432
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61432 /var/tmp/spdk.sock
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61432 ']'
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:20.754  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:20.754   16:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:21.013  [2024-12-09 16:18:49.931863] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:21.013  [2024-12-09 16:18:49.932000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61432 ]
00:08:21.013  [2024-12-09 16:18:50.116178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:21.273  [2024-12-09 16:18:50.241537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:21.273  [2024-12-09 16:18:50.241674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.273  [2024-12-09 16:18:50.241703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:22.210   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:22.210   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:22.210   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61456
00:08:22.210   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61456 /var/tmp/spdk2.sock
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61456 /var/tmp/spdk2.sock
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:22.211    16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61456 /var/tmp/spdk2.sock
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61456 ']'
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:22.211  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:22.211   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:22.211  [2024-12-09 16:18:51.266842] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:22.211  [2024-12-09 16:18:51.267169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ]
00:08:22.470  [2024-12-09 16:18:51.451676] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61432 has claimed it.
00:08:22.470  [2024-12-09 16:18:51.451760] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:22.729  ERROR: process (pid: 61456) is no longer running
00:08:22.729  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61456) - No such process
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61432
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61432 ']'
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61432
00:08:22.729    16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:08:22.729   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:22.729    16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61432
00:08:22.988   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:22.988   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:22.988   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61432'
00:08:22.988  killing process with pid 61432
00:08:22.988   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61432
00:08:22.988   16:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61432
00:08:25.593  
00:08:25.593  real	0m4.577s
00:08:25.593  user	0m12.354s
00:08:25.593  sys	0m0.663s
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.593  ************************************
00:08:25.593  END TEST locking_overlapped_coremask
00:08:25.593  ************************************
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:25.593   16:18:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:25.593   16:18:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:25.593   16:18:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:25.593   16:18:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:25.593  ************************************
00:08:25.593  START TEST locking_overlapped_coremask_via_rpc
00:08:25.593  ************************************
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61525
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61525 /var/tmp/spdk.sock
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61525 ']'
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:25.593  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:25.593   16:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:25.593  [2024-12-09 16:18:54.583106] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:25.593  [2024-12-09 16:18:54.583237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61525 ]
00:08:25.593  [2024-12-09 16:18:54.767181] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:25.593  [2024-12-09 16:18:54.767257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:25.852  [2024-12-09 16:18:54.889250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:25.852  [2024-12-09 16:18:54.889395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.852  [2024-12-09 16:18:54.889423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61543
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61543 /var/tmp/spdk2.sock
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61543 ']'
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:26.792  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:26.792   16:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:26.792  [2024-12-09 16:18:55.851119] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:26.792  [2024-12-09 16:18:55.851248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61543 ]
00:08:27.052  [2024-12-09 16:18:56.036902] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:27.052  [2024-12-09 16:18:56.036971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:27.311  [2024-12-09 16:18:56.263937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:27.311  [2024-12-09 16:18:56.267086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:27.311  [2024-12-09 16:18:56.267133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:29.848    16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.848   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:29.848  [2024-12-09 16:18:58.438093] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61525 has claimed it.
00:08:29.848  request:
00:08:29.848  {
00:08:29.848  "method": "framework_enable_cpumask_locks",
00:08:29.848  "req_id": 1
00:08:29.849  }
00:08:29.849  Got JSON-RPC error response
00:08:29.849  response:
00:08:29.849  {
00:08:29.849  "code": -32603,
00:08:29.849  "message": "Failed to claim CPU core: 2"
00:08:29.849  }
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61525 /var/tmp/spdk.sock
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61525 ']'
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:29.849  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61543 /var/tmp/spdk2.sock
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61543 ']'
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:29.849  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:29.849  
00:08:29.849  real	0m4.415s
00:08:29.849  user	0m1.259s
00:08:29.849  sys	0m0.228s
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:29.849   16:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:29.849  ************************************
00:08:29.849  END TEST locking_overlapped_coremask_via_rpc
00:08:29.849  ************************************
00:08:29.849   16:18:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:08:29.849   16:18:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61525 ]]
00:08:29.849   16:18:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61525
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61525 ']'
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61525
00:08:29.849    16:18:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:29.849    16:18:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61525
00:08:29.849  killing process with pid 61525
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61525'
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61525
00:08:29.849   16:18:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61525
00:08:32.384   16:19:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61543 ]]
00:08:32.384   16:19:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61543
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61543 ']'
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61543
00:08:32.384    16:19:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:32.384    16:19:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61543
00:08:32.384  killing process with pid 61543
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61543'
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61543
00:08:32.384   16:19:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61543
00:08:34.927   16:19:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:34.928   16:19:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:08:34.928   16:19:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61525 ]]
00:08:34.928  Process with pid 61525 is not found
00:08:34.928   16:19:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61525
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61525 ']'
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61525
00:08:34.928  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61525) - No such process
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61525 is not found'
00:08:34.928   16:19:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61543 ]]
00:08:34.928   16:19:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61543
00:08:34.928  Process with pid 61543 is not found
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61543 ']'
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61543
00:08:34.928  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61543) - No such process
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61543 is not found'
00:08:34.928   16:19:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:34.928  
00:08:34.928  real	0m51.860s
00:08:34.928  user	1m27.761s
00:08:34.928  sys	0m7.211s
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.928   16:19:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:34.928  ************************************
00:08:34.928  END TEST cpu_locks
00:08:34.928  ************************************
00:08:34.928  
00:08:34.928  real	1m22.118s
00:08:34.928  user	2m25.300s
00:08:34.928  sys	0m11.760s
00:08:34.928   16:19:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.928   16:19:03 event -- common/autotest_common.sh@10 -- # set +x
00:08:34.928  ************************************
00:08:34.928  END TEST event
00:08:34.928  ************************************
00:08:34.928   16:19:04  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:34.928   16:19:04  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:34.928   16:19:04  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.928   16:19:04  -- common/autotest_common.sh@10 -- # set +x
00:08:34.928  ************************************
00:08:34.928  START TEST thread
00:08:34.928  ************************************
00:08:34.928   16:19:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:35.201  * Looking for test storage...
00:08:35.201  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:08:35.201    16:19:04 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:35.201     16:19:04 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:08:35.201     16:19:04 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:35.201    16:19:04 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:35.201    16:19:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:35.201    16:19:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:35.202    16:19:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:35.202    16:19:04 thread -- scripts/common.sh@336 -- # IFS=.-:
00:08:35.202    16:19:04 thread -- scripts/common.sh@336 -- # read -ra ver1
00:08:35.202    16:19:04 thread -- scripts/common.sh@337 -- # IFS=.-:
00:08:35.202    16:19:04 thread -- scripts/common.sh@337 -- # read -ra ver2
00:08:35.202    16:19:04 thread -- scripts/common.sh@338 -- # local 'op=<'
00:08:35.202    16:19:04 thread -- scripts/common.sh@340 -- # ver1_l=2
00:08:35.202    16:19:04 thread -- scripts/common.sh@341 -- # ver2_l=1
00:08:35.202    16:19:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:35.202    16:19:04 thread -- scripts/common.sh@344 -- # case "$op" in
00:08:35.202    16:19:04 thread -- scripts/common.sh@345 -- # : 1
00:08:35.202    16:19:04 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:35.202    16:19:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:35.202     16:19:04 thread -- scripts/common.sh@365 -- # decimal 1
00:08:35.202     16:19:04 thread -- scripts/common.sh@353 -- # local d=1
00:08:35.202     16:19:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:35.202     16:19:04 thread -- scripts/common.sh@355 -- # echo 1
00:08:35.202    16:19:04 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:08:35.202     16:19:04 thread -- scripts/common.sh@366 -- # decimal 2
00:08:35.202     16:19:04 thread -- scripts/common.sh@353 -- # local d=2
00:08:35.202     16:19:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:35.202     16:19:04 thread -- scripts/common.sh@355 -- # echo 2
00:08:35.202    16:19:04 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:08:35.202    16:19:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:35.202    16:19:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:35.202    16:19:04 thread -- scripts/common.sh@368 -- # return 0
00:08:35.202    16:19:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:35.202    16:19:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:35.202  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.202  		--rc genhtml_branch_coverage=1
00:08:35.202  		--rc genhtml_function_coverage=1
00:08:35.202  		--rc genhtml_legend=1
00:08:35.202  		--rc geninfo_all_blocks=1
00:08:35.202  		--rc geninfo_unexecuted_blocks=1
00:08:35.202  		
00:08:35.202  		'
00:08:35.202    16:19:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:35.202  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.202  		--rc genhtml_branch_coverage=1
00:08:35.202  		--rc genhtml_function_coverage=1
00:08:35.202  		--rc genhtml_legend=1
00:08:35.202  		--rc geninfo_all_blocks=1
00:08:35.202  		--rc geninfo_unexecuted_blocks=1
00:08:35.202  		
00:08:35.202  		'
00:08:35.202    16:19:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:35.202  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.202  		--rc genhtml_branch_coverage=1
00:08:35.202  		--rc genhtml_function_coverage=1
00:08:35.202  		--rc genhtml_legend=1
00:08:35.202  		--rc geninfo_all_blocks=1
00:08:35.202  		--rc geninfo_unexecuted_blocks=1
00:08:35.202  		
00:08:35.202  		'
00:08:35.202    16:19:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:35.202  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.202  		--rc genhtml_branch_coverage=1
00:08:35.202  		--rc genhtml_function_coverage=1
00:08:35.202  		--rc genhtml_legend=1
00:08:35.202  		--rc geninfo_all_blocks=1
00:08:35.202  		--rc geninfo_unexecuted_blocks=1
00:08:35.202  		
00:08:35.202  		'
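Everything run under this test now inherits the exported LCOV_OPTS and LCOV, so any later coverage capture picks up branch and function coverage plus the genhtml defaults without each call site repeating the flags. A hedged usage sketch (the paths are illustrative, not from this run):

    $LCOV --capture --directory build/ --output-file cov.info   # expands to lcov --rc lcov_branch_coverage=1 ...
    genhtml --branch-coverage --function-coverage -o cov_html cov.info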
00:08:35.202   16:19:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:35.202   16:19:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:35.202   16:19:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:35.202   16:19:04 thread -- common/autotest_common.sh@10 -- # set +x
00:08:35.202  ************************************
00:08:35.202  START TEST thread_poller_perf
00:08:35.202  ************************************
00:08:35.202   16:19:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:35.202  [2024-12-09 16:19:04.336280] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:35.202  [2024-12-09 16:19:04.336504] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ]
00:08:35.462  [2024-12-09 16:19:04.514340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:35.462  [2024-12-09 16:19:04.624588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.462  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:08:36.841  [2024-12-09T16:19:06.020Z]  ======================================
00:08:36.841  [2024-12-09T16:19:06.020Z]  busy:2502895898 (cyc)
00:08:36.841  [2024-12-09T16:19:06.020Z]  total_run_count: 392000
00:08:36.841  [2024-12-09T16:19:06.020Z]  tsc_hz: 2490000000 (cyc)
00:08:36.841  [2024-12-09T16:19:06.020Z]  ======================================
00:08:36.841  [2024-12-09T16:19:06.020Z]  poller_cost: 6384 (cyc), 2563 (nsec)
00:08:36.841  
00:08:36.841  real	0m1.571s
00:08:36.841  user	0m1.361s
00:08:36.841  sys	0m0.101s
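The summary follows from two counters: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure converts cycles at tsc_hz. A quick check with the numbers above (a sketch; it assumes the tool truncates to whole cycles before converting, which matches the printed 2563):

    awk 'BEGIN { c = int(2502895898 / 392000)          # 6384 cyc per poll
                 printf "%d cyc, %d nsec\n", c, c * 1e9 / 2490000000 }'
    # -> 6384 cyc, 2563 nsec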
00:08:36.841   16:19:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:36.841   16:19:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:36.841  ************************************
00:08:36.841  END TEST thread_poller_perf
00:08:36.841  ************************************
00:08:36.841   16:19:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:36.841   16:19:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:36.841   16:19:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:36.841   16:19:05 thread -- common/autotest_common.sh@10 -- # set +x
00:08:36.841  ************************************
00:08:36.841  START TEST thread_poller_perf
00:08:36.841  ************************************
00:08:36.841   16:19:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:36.841  [2024-12-09 16:19:05.980400] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:36.841  [2024-12-09 16:19:05.980532] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61775 ]
00:08:37.101  [2024-12-09 16:19:06.161745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.101  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:37.101  [2024-12-09 16:19:06.270890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.480  [2024-12-09T16:19:07.659Z]  ======================================
00:08:38.480  [2024-12-09T16:19:07.659Z]  busy:2493719934 (cyc)
00:08:38.480  [2024-12-09T16:19:07.659Z]  total_run_count: 4934000
00:08:38.480  [2024-12-09T16:19:07.659Z]  tsc_hz: 2490000000 (cyc)
00:08:38.480  [2024-12-09T16:19:07.659Z]  ======================================
00:08:38.480  [2024-12-09T16:19:07.659Z]  poller_cost: 505 (cyc), 202 (nsec)
00:08:38.480  
00:08:38.480  real	0m1.564s
00:08:38.480  user	0m1.353s
00:08:38.480  sys	0m0.104s
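Against the first run, dropping the period from 1 us to 0 turns the pollers into busy pollers: per-poll cost falls from 6384 cyc (2563 ns) to 505 cyc (202 ns) while total_run_count rises from 392000 to 4934000. One plausible reading is that the timed-poller path pays timer bookkeeping that the busy-poll path skips:

    awk 'BEGIN { printf "%.1fx cheaper per poll\n", 6384 / 505 }'   # ~12.6x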
00:08:38.480   16:19:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.480  ************************************
00:08:38.480  END TEST thread_poller_perf
00:08:38.480  ************************************
00:08:38.480   16:19:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:38.480   16:19:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:38.480  ************************************
00:08:38.480  END TEST thread
00:08:38.480  ************************************
00:08:38.480  
00:08:38.480  real	0m3.513s
00:08:38.480  user	0m2.893s
00:08:38.480  sys	0m0.408s
00:08:38.480   16:19:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.480   16:19:07 thread -- common/autotest_common.sh@10 -- # set +x
00:08:38.480   16:19:07  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:08:38.480   16:19:07  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:38.480   16:19:07  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:38.480   16:19:07  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:38.480   16:19:07  -- common/autotest_common.sh@10 -- # set +x
00:08:38.480  ************************************
00:08:38.480  START TEST app_cmdline
00:08:38.480  ************************************
00:08:38.480   16:19:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:38.739  * Looking for test storage...
00:08:38.739  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:38.739     16:19:07 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:08:38.739     16:19:07 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@345 -- # : 1
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:38.739     16:19:07 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:38.739    16:19:07 app_cmdline -- scripts/common.sh@368 -- # return 0
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:38.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.739  		--rc genhtml_branch_coverage=1
00:08:38.739  		--rc genhtml_function_coverage=1
00:08:38.739  		--rc genhtml_legend=1
00:08:38.739  		--rc geninfo_all_blocks=1
00:08:38.739  		--rc geninfo_unexecuted_blocks=1
00:08:38.739  		
00:08:38.739  		'
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:38.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.739  		--rc genhtml_branch_coverage=1
00:08:38.739  		--rc genhtml_function_coverage=1
00:08:38.739  		--rc genhtml_legend=1
00:08:38.739  		--rc geninfo_all_blocks=1
00:08:38.739  		--rc geninfo_unexecuted_blocks=1
00:08:38.739  		
00:08:38.739  		'
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:38.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.739  		--rc genhtml_branch_coverage=1
00:08:38.739  		--rc genhtml_function_coverage=1
00:08:38.739  		--rc genhtml_legend=1
00:08:38.739  		--rc geninfo_all_blocks=1
00:08:38.739  		--rc geninfo_unexecuted_blocks=1
00:08:38.739  		
00:08:38.739  		'
00:08:38.739    16:19:07 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:38.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.739  		--rc genhtml_branch_coverage=1
00:08:38.739  		--rc genhtml_function_coverage=1
00:08:38.739  		--rc genhtml_legend=1
00:08:38.739  		--rc geninfo_all_blocks=1
00:08:38.739  		--rc geninfo_unexecuted_blocks=1
00:08:38.739  		
00:08:38.739  		'
00:08:38.739   16:19:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:08:38.739   16:19:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61864
00:08:38.739   16:19:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61864
00:08:38.739   16:19:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:08:38.739   16:19:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61864 ']'
00:08:38.739   16:19:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:38.739   16:19:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:38.739  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:38.739   16:19:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:38.739   16:19:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:38.739   16:19:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:38.998  [2024-12-09 16:19:07.968347] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:38.998  [2024-12-09 16:19:07.969306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61864 ]
00:08:38.998  [2024-12-09 16:19:08.144428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.256  [2024-12-09 16:19:08.262404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.193   16:19:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:40.193   16:19:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:08:40.193   16:19:09 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:08:40.193  {
00:08:40.193    "version": "SPDK v25.01-pre git sha1 6584139bf",
00:08:40.193    "fields": {
00:08:40.193      "major": 25,
00:08:40.193      "minor": 1,
00:08:40.193      "patch": 0,
00:08:40.193      "suffix": "-pre",
00:08:40.193      "commit": "6584139bf"
00:08:40.193    }
00:08:40.193  }
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:40.452    16:19:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:40.452    16:19:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:40.452    16:19:09 app_cmdline -- app/cmdline.sh@26 -- # sort
00:08:40.452    16:19:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.452    16:19:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:40.452    16:19:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:08:40.452   16:19:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:40.452    16:19:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:40.452    16:19:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:08:40.452   16:19:09 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:40.452  request:
00:08:40.452  {
00:08:40.452    "method": "env_dpdk_get_mem_stats",
00:08:40.452    "req_id": 1
00:08:40.452  }
00:08:40.452  Got JSON-RPC error response
00:08:40.452  response:
00:08:40.452  {
00:08:40.452    "code": -32601,
00:08:40.452    "message": "Method not found"
00:08:40.452  }
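This failure is the point of the test: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods work and anything else is rejected with JSON-RPC -32601 before it is dispatched. A sketch of both sides, using the same rpc.py as above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version         # allowed -> returns the version object
    $RPC rpc_get_methods          # allowed -> exactly the two allowlisted methods
    $RPC env_dpdk_get_mem_stats   # not allowlisted -> -32601 "Method not found"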
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:40.711   16:19:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61864
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61864 ']'
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61864
00:08:40.711    16:19:09 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:40.711    16:19:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61864
00:08:40.711  killing process with pid 61864
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61864'
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 61864
00:08:40.711   16:19:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 61864
00:08:43.250  
00:08:43.250  real	0m4.453s
00:08:43.250  user	0m4.614s
00:08:43.250  sys	0m0.660s
00:08:43.250  ************************************
00:08:43.250  END TEST app_cmdline
00:08:43.250  ************************************
00:08:43.250   16:19:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.250   16:19:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:43.250   16:19:12  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:08:43.250   16:19:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:43.250   16:19:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.250   16:19:12  -- common/autotest_common.sh@10 -- # set +x
00:08:43.250  ************************************
00:08:43.250  START TEST version
00:08:43.250  ************************************
00:08:43.250   16:19:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:08:43.250  * Looking for test storage...
00:08:43.250  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:08:43.250    16:19:12 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:43.250     16:19:12 version -- common/autotest_common.sh@1711 -- # lcov --version
00:08:43.250     16:19:12 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:43.250    16:19:12 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:43.250    16:19:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:43.250    16:19:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:43.250    16:19:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:43.250    16:19:12 version -- scripts/common.sh@336 -- # IFS=.-:
00:08:43.250    16:19:12 version -- scripts/common.sh@336 -- # read -ra ver1
00:08:43.250    16:19:12 version -- scripts/common.sh@337 -- # IFS=.-:
00:08:43.250    16:19:12 version -- scripts/common.sh@337 -- # read -ra ver2
00:08:43.250    16:19:12 version -- scripts/common.sh@338 -- # local 'op=<'
00:08:43.250    16:19:12 version -- scripts/common.sh@340 -- # ver1_l=2
00:08:43.250    16:19:12 version -- scripts/common.sh@341 -- # ver2_l=1
00:08:43.250    16:19:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:43.250    16:19:12 version -- scripts/common.sh@344 -- # case "$op" in
00:08:43.250    16:19:12 version -- scripts/common.sh@345 -- # : 1
00:08:43.250    16:19:12 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:43.251    16:19:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:43.251     16:19:12 version -- scripts/common.sh@365 -- # decimal 1
00:08:43.251     16:19:12 version -- scripts/common.sh@353 -- # local d=1
00:08:43.251     16:19:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:43.251     16:19:12 version -- scripts/common.sh@355 -- # echo 1
00:08:43.251    16:19:12 version -- scripts/common.sh@365 -- # ver1[v]=1
00:08:43.251     16:19:12 version -- scripts/common.sh@366 -- # decimal 2
00:08:43.251     16:19:12 version -- scripts/common.sh@353 -- # local d=2
00:08:43.251     16:19:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:43.251     16:19:12 version -- scripts/common.sh@355 -- # echo 2
00:08:43.251    16:19:12 version -- scripts/common.sh@366 -- # ver2[v]=2
00:08:43.251    16:19:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:43.251    16:19:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:43.251    16:19:12 version -- scripts/common.sh@368 -- # return 0
00:08:43.251    16:19:12 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:43.251    16:19:12 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:43.251  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.251  		--rc genhtml_branch_coverage=1
00:08:43.251  		--rc genhtml_function_coverage=1
00:08:43.251  		--rc genhtml_legend=1
00:08:43.251  		--rc geninfo_all_blocks=1
00:08:43.251  		--rc geninfo_unexecuted_blocks=1
00:08:43.251  		
00:08:43.251  		'
00:08:43.251    16:19:12 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:43.251  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.251  		--rc genhtml_branch_coverage=1
00:08:43.251  		--rc genhtml_function_coverage=1
00:08:43.251  		--rc genhtml_legend=1
00:08:43.251  		--rc geninfo_all_blocks=1
00:08:43.251  		--rc geninfo_unexecuted_blocks=1
00:08:43.251  		
00:08:43.251  		'
00:08:43.251    16:19:12 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:43.251  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.251  		--rc genhtml_branch_coverage=1
00:08:43.251  		--rc genhtml_function_coverage=1
00:08:43.251  		--rc genhtml_legend=1
00:08:43.251  		--rc geninfo_all_blocks=1
00:08:43.251  		--rc geninfo_unexecuted_blocks=1
00:08:43.251  		
00:08:43.251  		'
00:08:43.251    16:19:12 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:43.251  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.251  		--rc genhtml_branch_coverage=1
00:08:43.251  		--rc genhtml_function_coverage=1
00:08:43.251  		--rc genhtml_legend=1
00:08:43.251  		--rc geninfo_all_blocks=1
00:08:43.251  		--rc geninfo_unexecuted_blocks=1
00:08:43.251  		
00:08:43.251  		'
00:08:43.251    16:19:12 version -- app/version.sh@17 -- # get_header_version major
00:08:43.251    16:19:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # cut -f2
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # tr -d '"'
00:08:43.251   16:19:12 version -- app/version.sh@17 -- # major=25
00:08:43.251    16:19:12 version -- app/version.sh@18 -- # get_header_version minor
00:08:43.251    16:19:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # cut -f2
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # tr -d '"'
00:08:43.251   16:19:12 version -- app/version.sh@18 -- # minor=1
00:08:43.251    16:19:12 version -- app/version.sh@19 -- # get_header_version patch
00:08:43.251    16:19:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # cut -f2
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # tr -d '"'
00:08:43.251   16:19:12 version -- app/version.sh@19 -- # patch=0
00:08:43.251    16:19:12 version -- app/version.sh@20 -- # get_header_version suffix
00:08:43.251    16:19:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # cut -f2
00:08:43.251    16:19:12 version -- app/version.sh@14 -- # tr -d '"'
00:08:43.510   16:19:12 version -- app/version.sh@20 -- # suffix=-pre
00:08:43.510   16:19:12 version -- app/version.sh@22 -- # version=25.1
00:08:43.510   16:19:12 version -- app/version.sh@25 -- # (( patch != 0 ))
00:08:43.510   16:19:12 version -- app/version.sh@28 -- # version=25.1rc0
00:08:43.511   16:19:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:08:43.511    16:19:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:08:43.511   16:19:12 version -- app/version.sh@30 -- # py_version=25.1rc0
00:08:43.511   16:19:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
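version.sh recovers each component with the grep/cut/tr pipeline traced above, then rebuilds the version string: the patch number is appended only when nonzero, and a -pre suffix becomes rc0, giving 25.1rc0, which must match what the spdk Python package reports. A condensed sketch of that logic (same pipeline as the trace):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    version=$(get MAJOR).$(get MINOR)
    (( $(get PATCH) != 0 )) && version+=".$(get PATCH)"
    [[ $(get SUFFIX) == -pre ]] && version+=rc0
    echo "$version"   # 25.1rc0 for this checkout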
00:08:43.511  ************************************
00:08:43.511  END TEST version
00:08:43.511  ************************************
00:08:43.511  
00:08:43.511  real	0m0.319s
00:08:43.511  user	0m0.185s
00:08:43.511  sys	0m0.193s
00:08:43.511   16:19:12 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.511   16:19:12 version -- common/autotest_common.sh@10 -- # set +x
00:08:43.511   16:19:12  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:08:43.511   16:19:12  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:08:43.511    16:19:12  -- spdk/autotest.sh@194 -- # uname -s
00:08:43.511   16:19:12  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:08:43.511   16:19:12  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:08:43.511   16:19:12  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:08:43.511   16:19:12  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:08:43.511   16:19:12  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:08:43.511   16:19:12  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:43.511   16:19:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.511   16:19:12  -- common/autotest_common.sh@10 -- # set +x
00:08:43.511  ************************************
00:08:43.511  START TEST blockdev_nvme
00:08:43.511  ************************************
00:08:43.511   16:19:12 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:08:43.511  * Looking for test storage...
00:08:43.771  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:43.771     16:19:12 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:08:43.771     16:19:12 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:43.771     16:19:12 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:43.771    16:19:12 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:43.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.771  		--rc genhtml_branch_coverage=1
00:08:43.771  		--rc genhtml_function_coverage=1
00:08:43.771  		--rc genhtml_legend=1
00:08:43.771  		--rc geninfo_all_blocks=1
00:08:43.771  		--rc geninfo_unexecuted_blocks=1
00:08:43.771  		
00:08:43.771  		'
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:43.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.771  		--rc genhtml_branch_coverage=1
00:08:43.771  		--rc genhtml_function_coverage=1
00:08:43.771  		--rc genhtml_legend=1
00:08:43.771  		--rc geninfo_all_blocks=1
00:08:43.771  		--rc geninfo_unexecuted_blocks=1
00:08:43.771  		
00:08:43.771  		'
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:43.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.771  		--rc genhtml_branch_coverage=1
00:08:43.771  		--rc genhtml_function_coverage=1
00:08:43.771  		--rc genhtml_legend=1
00:08:43.771  		--rc geninfo_all_blocks=1
00:08:43.771  		--rc geninfo_unexecuted_blocks=1
00:08:43.771  		
00:08:43.771  		'
00:08:43.771    16:19:12 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:43.771  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.771  		--rc genhtml_branch_coverage=1
00:08:43.771  		--rc genhtml_function_coverage=1
00:08:43.771  		--rc genhtml_legend=1
00:08:43.771  		--rc geninfo_all_blocks=1
00:08:43.771  		--rc geninfo_unexecuted_blocks=1
00:08:43.771  		
00:08:43.771  		'
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:08:43.771    16:19:12 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:08:43.771    16:19:12 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek=
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]]
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]]
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62058
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 62058
00:08:43.771   16:19:12 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:08:43.771   16:19:12 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 62058 ']'
00:08:43.771   16:19:12 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:43.771   16:19:12 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:43.771   16:19:12 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:43.771  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:43.771   16:19:12 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:43.771   16:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:43.771  [2024-12-09 16:19:12.919118] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:43.771  [2024-12-09 16:19:12.919466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62058 ]
00:08:44.031  [2024-12-09 16:19:13.100926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.289  [2024-12-09 16:19:13.218387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.227   16:19:14 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:45.227   16:19:14 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
00:08:45.227   16:19:14 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:08:45.227   16:19:14 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf
00:08:45.227   16:19:14 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:08:45.227   16:19:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:08:45.227    16:19:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:45.227   16:19:14 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:08:45.227   16:19:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.227   16:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.486   16:19:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.486   16:19:14 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:08:45.486   16:19:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.486   16:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.486   16:19:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.486   16:19:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat
00:08:45.486    16:19:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.486    16:19:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.486    16:19:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.486   16:19:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:08:45.486    16:19:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:08:45.486    16:19:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.486    16:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:45.746    16:19:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.746   16:19:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:08:45.746    16:19:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name
00:08:45.747    16:19:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "ae1f3b6e-3c2f-4555-bda3-f9d607b1c822"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "ae1f3b6e-3c2f-4555-bda3-f9d607b1c822",' '  "numa_id": -1,' '  "md_size": 64,' '  "md_interleave": false,' '  "dif_type": 0,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": true,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1",' '  "aliases": [' '    "c6e1fbf4-b470-4720-9849-e4e1c99a1416"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "c6e1fbf4-b470-4720-9849-e4e1c99a1416",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:11.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:11.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12341",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12341",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            
"firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n1",' '  "aliases": [' '    "2de81b14-0a26-4b29-a8db-09362dd8b10e"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "2de81b14-0a26-4b29-a8db-09362dd8b10e",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n2",' '  "aliases": [' '    "3c1202f7-fb2e-4a9c-92bc-fc9e997d2b78"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "3c1202f7-fb2e-4a9c-92bc-fc9e997d2b78",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          
"serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 2,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n3",' '  "aliases": [' '    "82c3a39c-198b-4cdb-91f1-5ce86b4ee0c1"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "82c3a39c-198b-4cdb-91f1-5ce86b4ee0c1",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 3,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme3n1",' '  "aliases": [' '    "6a7869d3-5a06-4264-8f2b-f1f53e51ab11"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "6a7869d3-5a06-4264-8f2b-f1f53e51ab11",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:13.0",' '        "trid": {' '          
"trtype": "PCIe",' '          "traddr": "0000:00:13.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12343",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": true,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": true' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:08:45.747   16:19:14 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:08:45.747   16:19:14 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1
00:08:45.747   16:19:14 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:08:45.747   16:19:14 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 62058
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 62058 ']'
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 62058
00:08:45.747    16:19:14 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:45.747    16:19:14 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62058
00:08:45.747  killing process with pid 62058
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62058'
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 62058
00:08:45.747   16:19:14 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 62058
00:08:48.275   16:19:17 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:48.275   16:19:17 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:08:48.275   16:19:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:08:48.275   16:19:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.275   16:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:48.275  ************************************
00:08:48.275  START TEST bdev_hello_world
00:08:48.275  ************************************
00:08:48.275   16:19:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:08:48.275  [2024-12-09 16:19:17.257105] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:48.275  [2024-12-09 16:19:17.257236] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62153 ]
00:08:48.275  [2024-12-09 16:19:17.438232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:48.533  [2024-12-09 16:19:17.553266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:49.098  [2024-12-09 16:19:18.205982] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:08:49.098  [2024-12-09 16:19:18.206035] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:08:49.098  [2024-12-09 16:19:18.206058] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:08:49.098  [2024-12-09 16:19:18.208988] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:08:49.098  [2024-12-09 16:19:18.209651] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:08:49.098  [2024-12-09 16:19:18.209687] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:08:49.098  [2024-12-09 16:19:18.209980] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:08:49.098  
00:08:49.098  [2024-12-09 16:19:18.210005] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:08:50.474  ************************************
00:08:50.474  END TEST bdev_hello_world
00:08:50.474  ************************************
00:08:50.474  
00:08:50.474  real	0m2.164s
00:08:50.474  user	0m1.795s
00:08:50.474  sys	0m0.261s
00:08:50.474   16:19:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:50.474   16:19:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:50.474   16:19:19 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:08:50.474   16:19:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:50.474   16:19:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:50.474   16:19:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:50.474  ************************************
00:08:50.474  START TEST bdev_bounds
00:08:50.474  ************************************
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62195
00:08:50.474  Process bdevio pid: 62195
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62195'
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62195
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62195 ']'
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:50.474  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:50.474   16:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:50.474  [2024-12-09 16:19:19.495688] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:50.474  [2024-12-09 16:19:19.496087] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ]
00:08:50.733  [2024-12-09 16:19:19.677505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:50.733  [2024-12-09 16:19:19.798779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:50.733  [2024-12-09 16:19:19.798925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.733  [2024-12-09 16:19:19.798982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:51.670   16:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:51.670   16:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:08:51.670   16:19:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:51.670  I/O targets:
00:08:51.670    Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:08:51.670    Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:08:51.670    Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:51.670    Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:51.670    Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:51.670    Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:08:51.670  
00:08:51.670  
00:08:51.670       CUnit - A unit testing framework for C - Version 2.1-3
00:08:51.670       http://cunit.sourceforge.net/
00:08:51.670  
00:08:51.670  
00:08:51.670  Suite: bdevio tests on: Nvme3n1
00:08:51.670    Test: blockdev write read block ...passed
00:08:51.670    Test: blockdev write zeroes read block ...passed
00:08:51.670    Test: blockdev write zeroes read no split ...passed
00:08:51.670    Test: blockdev write zeroes read split ...passed
00:08:51.670    Test: blockdev write zeroes read split partial ...passed
00:08:51.670    Test: blockdev reset ...[2024-12-09 16:19:20.680137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:08:51.670  [2024-12-09 16:19:20.684223] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:08:51.670  passed
00:08:51.670    Test: blockdev write read 8 blocks ...passed
00:08:51.670    Test: blockdev write read size > 128k ...passed
00:08:51.670    Test: blockdev write read invalid size ...passed
00:08:51.670    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:51.670    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:51.670    Test: blockdev write read max offset ...passed
00:08:51.670    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:51.670    Test: blockdev writev readv 8 blocks ...passed
00:08:51.670    Test: blockdev writev readv 30 x 1block ...passed
00:08:51.670    Test: blockdev writev readv block ...passed
00:08:51.670    Test: blockdev writev readv size > 128k ...passed
00:08:51.670    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:51.670    Test: blockdev comparev and writev ...[2024-12-09 16:19:20.694692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2a0a000 len:0x1000
00:08:51.670  [2024-12-09 16:19:20.694889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:51.670  passed
00:08:51.670    Test: blockdev nvme passthru rw ...passed
00:08:51.670    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:19:20.696007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:51.670  [2024-12-09 16:19:20.696161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:51.670  passed
00:08:51.670    Test: blockdev nvme admin passthru ...passed
00:08:51.670    Test: blockdev copy ...passed
00:08:51.670  Suite: bdevio tests on: Nvme2n3
00:08:51.670    Test: blockdev write read block ...passed
00:08:51.670    Test: blockdev write zeroes read block ...passed
00:08:51.670    Test: blockdev write zeroes read no split ...passed
00:08:51.670    Test: blockdev write zeroes read split ...passed
00:08:51.670    Test: blockdev write zeroes read split partial ...passed
00:08:51.670    Test: blockdev reset ...[2024-12-09 16:19:20.779350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:08:51.670  [2024-12-09 16:19:20.783616] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:51.670  passed
00:08:51.670    Test: blockdev write read 8 blocks ...passed
00:08:51.670    Test: blockdev write read size > 128k ...passed
00:08:51.670    Test: blockdev write read invalid size ...passed
00:08:51.670    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:51.670    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:51.670    Test: blockdev write read max offset ...passed
00:08:51.670    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:51.670    Test: blockdev writev readv 8 blocks ...passed
00:08:51.670    Test: blockdev writev readv 30 x 1block ...passed
00:08:51.670    Test: blockdev writev readv block ...passed
00:08:51.670    Test: blockdev writev readv size > 128k ...passed
00:08:51.671    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:51.671    Test: blockdev comparev and writev ...[2024-12-09 16:19:20.792019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295406000 len:0x1000
00:08:51.671  [2024-12-09 16:19:20.792177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:51.671  passed
00:08:51.671    Test: blockdev nvme passthru rw ...passed
00:08:51.671    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:19:20.793027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:51.671  [2024-12-09 16:19:20.793059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:51.671  passed
00:08:51.671    Test: blockdev nvme admin passthru ...passed
00:08:51.671    Test: blockdev copy ...passed
00:08:51.671  Suite: bdevio tests on: Nvme2n2
00:08:51.671    Test: blockdev write read block ...passed
00:08:51.671    Test: blockdev write zeroes read block ...passed
00:08:51.671    Test: blockdev write zeroes read no split ...passed
00:08:51.671    Test: blockdev write zeroes read split ...passed
00:08:51.930    Test: blockdev write zeroes read split partial ...passed
00:08:51.930    Test: blockdev reset ...[2024-12-09 16:19:20.874993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:08:51.930  [2024-12-09 16:19:20.879349] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:51.930  passed
00:08:51.930    Test: blockdev write read 8 blocks ...passed
00:08:51.930    Test: blockdev write read size > 128k ...passed
00:08:51.930    Test: blockdev write read invalid size ...passed
00:08:51.930    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:51.930    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:51.930    Test: blockdev write read max offset ...passed
00:08:51.930    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:51.930    Test: blockdev writev readv 8 blocks ...passed
00:08:51.930    Test: blockdev writev readv 30 x 1block ...passed
00:08:51.930    Test: blockdev writev readv block ...passed
00:08:51.930    Test: blockdev writev readv size > 128k ...passed
00:08:51.930    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:51.930    Test: blockdev comparev and writev ...[2024-12-09 16:19:20.888809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a3c000 len:0x1000
00:08:51.930  [2024-12-09 16:19:20.889006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:51.930  passed
00:08:51.930    Test: blockdev nvme passthru rw ...passed
00:08:51.930    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:19:20.890100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:51.930  [2024-12-09 16:19:20.890243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:51.930  passed
00:08:51.930    Test: blockdev nvme admin passthru ...passed
00:08:51.930    Test: blockdev copy ...passed
00:08:51.930  Suite: bdevio tests on: Nvme2n1
00:08:51.930    Test: blockdev write read block ...passed
00:08:51.930    Test: blockdev write zeroes read block ...passed
00:08:51.930    Test: blockdev write zeroes read no split ...passed
00:08:51.930    Test: blockdev write zeroes read split ...passed
00:08:51.930    Test: blockdev write zeroes read split partial ...passed
00:08:51.930    Test: blockdev reset ...[2024-12-09 16:19:20.968990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:08:51.930  [2024-12-09 16:19:20.973174] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:51.930  passed
00:08:51.930    Test: blockdev write read 8 blocks ...passed
00:08:51.930    Test: blockdev write read size > 128k ...passed
00:08:51.930    Test: blockdev write read invalid size ...passed
00:08:51.930    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:51.930    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:51.930    Test: blockdev write read max offset ...passed
00:08:51.930    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:51.930    Test: blockdev writev readv 8 blocks ...passed
00:08:51.930    Test: blockdev writev readv 30 x 1block ...passed
00:08:51.930    Test: blockdev writev readv block ...passed
00:08:51.930    Test: blockdev writev readv size > 128k ...passed
00:08:51.930    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:51.930    Test: blockdev comparev and writev ...[2024-12-09 16:19:20.981235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a38000 len:0x1000
00:08:51.930  [2024-12-09 16:19:20.981399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:51.930  passed
00:08:51.930    Test: blockdev nvme passthru rw ...passed
00:08:51.930    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:19:20.982311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:51.930  [2024-12-09 16:19:20.982344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:51.930  passed
00:08:51.930    Test: blockdev nvme admin passthru ...passed
00:08:51.930    Test: blockdev copy ...passed
00:08:51.930  Suite: bdevio tests on: Nvme1n1
00:08:51.930    Test: blockdev write read block ...passed
00:08:51.930    Test: blockdev write zeroes read block ...passed
00:08:51.930    Test: blockdev write zeroes read no split ...passed
00:08:51.930    Test: blockdev write zeroes read split ...passed
00:08:51.930    Test: blockdev write zeroes read split partial ...passed
00:08:51.930    Test: blockdev reset ...[2024-12-09 16:19:21.061116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:08:51.930  [2024-12-09 16:19:21.064936] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:08:51.930  passed
00:08:51.930    Test: blockdev write read 8 blocks ...passed
00:08:51.930    Test: blockdev write read size > 128k ...passed
00:08:51.930    Test: blockdev write read invalid size ...passed
00:08:51.930    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:51.930    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:51.930    Test: blockdev write read max offset ...passed
00:08:51.930    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:51.930    Test: blockdev writev readv 8 blocks ...passed
00:08:51.930    Test: blockdev writev readv 30 x 1block ...passed
00:08:51.930    Test: blockdev writev readv block ...passed
00:08:51.930    Test: blockdev writev readv size > 128k ...passed
00:08:51.930    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:51.930    Test: blockdev comparev and writev ...[2024-12-09 16:19:21.073177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a34000 len:0x1000
00:08:51.930  [2024-12-09 16:19:21.073233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:51.930  passed
00:08:51.930    Test: blockdev nvme passthru rw ...passed
00:08:51.930    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:19:21.074117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:51.930  [2024-12-09 16:19:21.074159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:51.930  passed
00:08:51.930    Test: blockdev nvme admin passthru ...passed
00:08:51.930    Test: blockdev copy ...passed
00:08:51.930  Suite: bdevio tests on: Nvme0n1
00:08:51.930    Test: blockdev write read block ...passed
00:08:51.930    Test: blockdev write zeroes read block ...passed
00:08:51.930    Test: blockdev write zeroes read no split ...passed
00:08:52.189    Test: blockdev write zeroes read split ...passed
00:08:52.189    Test: blockdev write zeroes read split partial ...passed
00:08:52.189    Test: blockdev reset ...[2024-12-09 16:19:21.153753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:08:52.189  [2024-12-09 16:19:21.157547] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:08:52.189  passed
00:08:52.189    Test: blockdev write read 8 blocks ...passed
00:08:52.189    Test: blockdev write read size > 128k ...passed
00:08:52.189    Test: blockdev write read invalid size ...passed
00:08:52.189    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:52.189    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:52.189    Test: blockdev write read max offset ...passed
00:08:52.189    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:52.189    Test: blockdev writev readv 8 blocks ...passed
00:08:52.189    Test: blockdev writev readv 30 x 1block ...passed
00:08:52.189    Test: blockdev writev readv block ...passed
00:08:52.189    Test: blockdev writev readv size > 128k ...passed
00:08:52.189    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:52.189    Test: blockdev comparev and writev ...[2024-12-09 16:19:21.165866] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet.
00:08:52.189  passed
00:08:52.189    Test: blockdev nvme passthru rw ...passed
00:08:52.189    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:19:21.166814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:08:52.189  [2024-12-09 16:19:21.167006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:08:52.189  passed
00:08:52.189    Test: blockdev nvme admin passthru ...passed
00:08:52.189    Test: blockdev copy ...passed
00:08:52.189  
00:08:52.189  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:52.189                suites      6      6    n/a      0        0
00:08:52.189                 tests    138    138    138      0        0
00:08:52.189               asserts    893    893    893      0      n/a
00:08:52.189  
00:08:52.189  Elapsed time =    1.522 seconds
00:08:52.189  0
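00:08:52.189  # Condensed sketch of how this suite is driven, per the commands traced above
00:08:52.189  # (paths assume this job's workspace): bdevio starts in wait mode (-w) against
00:08:52.189  # the JSON bdev config, then tests.py issues perform_tests over its RPC socket.
00:08:52.189  #   /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
00:08:52.189  #       --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
00:08:52.189  #   /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests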
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62195
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62195 ']'
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62195
00:08:52.189    16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:52.189    16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62195
00:08:52.189  killing process with pid 62195
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62195'
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62195
00:08:52.189   16:19:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62195
00:08:53.126  ************************************
00:08:53.126  END TEST bdev_bounds
00:08:53.126  ************************************
00:08:53.126   16:19:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:08:53.126  
00:08:53.126  real	0m2.892s
00:08:53.126  user	0m7.390s
00:08:53.126  sys	0m0.421s
00:08:53.126   16:19:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.126   16:19:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:53.386   16:19:22 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:53.386   16:19:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:53.386   16:19:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.386   16:19:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:53.386  ************************************
00:08:53.386  START TEST bdev_nbd
00:08:53.386  ************************************
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:53.386    16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62260
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62260 /var/tmp/spdk-nbd.sock
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62260 ']'
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:53.386  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:53.386   16:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:08:53.386  [2024-12-09 16:19:22.475846] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:08:53.386  [2024-12-09 16:19:22.476211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:53.646  [2024-12-09 16:19:22.647541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.646  [2024-12-09 16:19:22.756715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:54.660    16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:54.660    16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:54.660  1+0 records in
00:08:54.660  1+0 records out
00:08:54.660  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608132 s, 6.7 MB/s
00:08:54.660    16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:54.660   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
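00:08:54.660  # The waitfornbd check traced above amounts to: poll /proc/partitions until
00:08:54.660  # the nbd name appears, then confirm the device answers a 4 KiB O_DIRECT read.
00:08:54.660  # A condensed sketch; the retry delay and the scratch-file path are assumed:
00:08:54.660  #   waitfornbd() {
00:08:54.660  #       local nbd_name=$1 i size
00:08:54.660  #       for ((i = 1; i <= 20; i++)); do
00:08:54.660  #           grep -q -w "$nbd_name" /proc/partitions && break
00:08:54.660  #           sleep 0.1  # assumed retry delay, not visible in the trace
00:08:54.660  #       done
00:08:54.660  #       dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
00:08:54.660  #       size=$(stat -c %s /tmp/nbdtest) && rm -f /tmp/nbdtest
00:08:54.660  #       [ "$size" != 0 ]
00:08:54.660  #   }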
00:08:54.660    16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:08:54.919    16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:54.919   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:54.919  1+0 records in
00:08:54.919  1+0 records out
00:08:54.919  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676523 s, 6.1 MB/s
00:08:54.919    16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:54.920   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:54.920   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:54.920   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:54.920   16:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:54.920   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:54.920   16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:54.920    16:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:08:55.179    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:55.179  1+0 records in
00:08:55.179  1+0 records out
00:08:55.179  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783798 s, 5.2 MB/s
00:08:55.179    16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:55.179   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:55.179    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:08:55.438    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:55.438  1+0 records in
00:08:55.438  1+0 records out
00:08:55.438  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599429 s, 6.8 MB/s
00:08:55.438    16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:55.438   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:55.438    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:08:55.696    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:55.696  1+0 records in
00:08:55.696  1+0 records out
00:08:55.696  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786192 s, 5.2 MB/s
00:08:55.696    16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:55.696   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:55.696    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:08:55.956    16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:55.956   16:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:55.956  1+0 records in
00:08:55.956  1+0 records out
00:08:55.956  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085463 s, 4.8 MB/s
00:08:55.956    16:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.956   16:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:55.956   16:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:55.956   16:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:55.956   16:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:55.956   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:55.956   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:55.956    16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:56.213   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:08:56.213    {
00:08:56.213      "nbd_device": "/dev/nbd0",
00:08:56.213      "bdev_name": "Nvme0n1"
00:08:56.213    },
00:08:56.213    {
00:08:56.213      "nbd_device": "/dev/nbd1",
00:08:56.213      "bdev_name": "Nvme1n1"
00:08:56.213    },
00:08:56.213    {
00:08:56.213      "nbd_device": "/dev/nbd2",
00:08:56.213      "bdev_name": "Nvme2n1"
00:08:56.213    },
00:08:56.213    {
00:08:56.213      "nbd_device": "/dev/nbd3",
00:08:56.213      "bdev_name": "Nvme2n2"
00:08:56.213    },
00:08:56.213    {
00:08:56.213      "nbd_device": "/dev/nbd4",
00:08:56.213      "bdev_name": "Nvme2n3"
00:08:56.213    },
00:08:56.213    {
00:08:56.214      "nbd_device": "/dev/nbd5",
00:08:56.214      "bdev_name": "Nvme3n1"
00:08:56.214    }
00:08:56.214  ]'
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:08:56.214    16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:08:56.214    16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:08:56.214    {
00:08:56.214      "nbd_device": "/dev/nbd0",
00:08:56.214      "bdev_name": "Nvme0n1"
00:08:56.214    },
00:08:56.214    {
00:08:56.214      "nbd_device": "/dev/nbd1",
00:08:56.214      "bdev_name": "Nvme1n1"
00:08:56.214    },
00:08:56.214    {
00:08:56.214      "nbd_device": "/dev/nbd2",
00:08:56.214      "bdev_name": "Nvme2n1"
00:08:56.214    },
00:08:56.214    {
00:08:56.214      "nbd_device": "/dev/nbd3",
00:08:56.214      "bdev_name": "Nvme2n2"
00:08:56.214    },
00:08:56.214    {
00:08:56.214      "nbd_device": "/dev/nbd4",
00:08:56.214      "bdev_name": "Nvme2n3"
00:08:56.214    },
00:08:56.214    {
00:08:56.214      "nbd_device": "/dev/nbd5",
00:08:56.214      "bdev_name": "Nvme3n1"
00:08:56.214    }
00:08:56.214  ]'
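00:08:56.214  # The mapping table above can be flattened the same way the harness does it,
00:08:56.214  # with jq over the nbd_get_disks RPC; a one-line sketch against this socket:
00:08:56.214  #   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
00:08:56.214  #       nbd_get_disks | jq -r '.[] | .nbd_device'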
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:56.214   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:56.472    16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
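00:08:56.472  # Teardown is the mirror image: nbd_stop_disk over the same socket, then poll
00:08:56.472  # until the name drops out of /proc/partitions. Sketch for nbd0 (poll interval
00:08:56.472  # assumed):
00:08:56.472  #   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
00:08:56.472  #       nbd_stop_disk /dev/nbd0
00:08:56.472  #   while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done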
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:56.472   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:56.731    16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:08:56.731    16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:56.731   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:08:56.990   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:56.990   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:56.990   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:56.990   16:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:08:56.990    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:56.990   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:08:57.249    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:57.250   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:08:57.508    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:57.508   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:57.508    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:57.508    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:57.508     16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:57.767    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:57.767     16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:57.767     16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:57.767    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:57.767     16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:57.767     16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:08:57.767     16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:08:57.767    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:08:57.767    16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
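00:08:57.767  # The count check above reduces to: list disks over RPC, keep the nbd_device
00:08:57.767  # fields, count /dev/nbd matches; after the stops it must be 0. Sketch (the
00:08:57.767  # '|| true' mirrors the fallback traced above, since grep -c exits nonzero
00:08:57.767  # when the list is empty):
00:08:57.767  #   count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
00:08:57.767  #       nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
00:08:57.767  #   [ "$count" -eq 0 ]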
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:57.767   16:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:08:58.027  /dev/nbd0
00:08:58.027    16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:58.027  1+0 records in
00:08:58.027  1+0 records out
00:08:58.027  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582815 s, 7.0 MB/s
00:08:58.027    16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:58.027   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:08:58.286  /dev/nbd1
00:08:58.286    16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:58.286  1+0 records in
00:08:58.286  1+0 records out
00:08:58.286  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680868 s, 6.0 MB/s
00:08:58.286    16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:58.286   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:08:58.545  /dev/nbd10
00:08:58.545    16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:58.545  1+0 records in
00:08:58.545  1+0 records out
00:08:58.545  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730238 s, 5.6 MB/s
00:08:58.545    16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:58.545   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:08:58.804  /dev/nbd11
00:08:58.804    16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:58.804  1+0 records in
00:08:58.804  1+0 records out
00:08:58.804  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805483 s, 5.1 MB/s
00:08:58.804    16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:58.804   16:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:08:59.064  /dev/nbd12
00:08:59.064    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:08:59.064   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:08:59.064   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:08:59.064   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:59.065  1+0 records in
00:08:59.065  1+0 records out
00:08:59.065  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762017 s, 5.4 MB/s
00:08:59.065    16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:59.065   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:08:59.325  /dev/nbd13
00:08:59.325    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:59.325  1+0 records in
00:08:59.325  1+0 records out
00:08:59.325  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000976722 s, 4.2 MB/s
00:08:59.325    16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:59.325   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
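Each of the six iterations above follows the same shape: nbd_start_disks in bdev/nbd_common.sh issues the nbd_start_disk RPC against the dedicated /var/tmp/spdk-nbd.sock server, the RPC prints the kernel device it attached, and the harness then runs waitfornbd on that name. Replayed as a standalone command (socket, bdev, and device taken from this log):

    # One iteration of the start-disk loop above, run by hand:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme1n1 /dev/nbd1
    # prints "/dev/nbd1" on success; the harness follows with: waitfornbd nbd1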
00:08:59.325    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:59.325    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:59.325     16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:59.584    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd0",
00:08:59.584      "bdev_name": "Nvme0n1"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd1",
00:08:59.584      "bdev_name": "Nvme1n1"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd10",
00:08:59.584      "bdev_name": "Nvme2n1"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd11",
00:08:59.584      "bdev_name": "Nvme2n2"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd12",
00:08:59.584      "bdev_name": "Nvme2n3"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd13",
00:08:59.584      "bdev_name": "Nvme3n1"
00:08:59.584    }
00:08:59.584  ]'
00:08:59.584     16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd0",
00:08:59.584      "bdev_name": "Nvme0n1"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd1",
00:08:59.584      "bdev_name": "Nvme1n1"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd10",
00:08:59.584      "bdev_name": "Nvme2n1"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd11",
00:08:59.584      "bdev_name": "Nvme2n2"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd12",
00:08:59.584      "bdev_name": "Nvme2n3"
00:08:59.584    },
00:08:59.584    {
00:08:59.584      "nbd_device": "/dev/nbd13",
00:08:59.584      "bdev_name": "Nvme3n1"
00:08:59.584    }
00:08:59.584  ]'
00:08:59.584     16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:59.584    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:59.584  /dev/nbd1
00:08:59.584  /dev/nbd10
00:08:59.584  /dev/nbd11
00:08:59.584  /dev/nbd12
00:08:59.584  /dev/nbd13'
00:08:59.584     16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:59.584  /dev/nbd1
00:08:59.584  /dev/nbd10
00:08:59.584  /dev/nbd11
00:08:59.584  /dev/nbd12
00:08:59.584  /dev/nbd13'
00:08:59.584     16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:59.585    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:08:59.585    16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
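The count check above is nbd_get_count: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq pulls out the device paths, and grep -c counts them; the caller then asserts the count equals the six disks it started ('[' 6 -ne 6 ']' is the failure branch, which does not fire here). Condensed sketch of the same pipeline:

    # Sketch of nbd_get_count as traced above (socket path from this log):
    nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -ne 6 ] && return 1   # fail if any export is missing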
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:08:59.585  256+0 records in
00:08:59.585  256+0 records out
00:08:59.585  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052794 s, 199 MB/s
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:59.585  256+0 records in
00:08:59.585  256+0 records out
00:08:59.585  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127075 s, 8.3 MB/s
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:59.585   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:59.844  256+0 records in
00:08:59.844  256+0 records out
00:08:59.844  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130377 s, 8.0 MB/s
00:08:59.844   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:59.844   16:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:08:59.844  256+0 records in
00:08:59.844  256+0 records out
00:08:59.844  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128002 s, 8.2 MB/s
00:08:59.844   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:59.844   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:09:00.104  256+0 records in
00:09:00.104  256+0 records out
00:09:00.104  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130246 s, 8.1 MB/s
00:09:00.104   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:00.104   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:09:00.364  256+0 records in
00:09:00.364  256+0 records out
00:09:00.364  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131275 s, 8.0 MB/s
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:09:00.364  256+0 records in
00:09:00.364  256+0 records out
00:09:00.364  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136836 s, 7.7 MB/s
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
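The write/verify pass above is nbd_dd_data_verify run twice: first in write mode, which generates one 1 MiB random pattern (256 x 4 KiB from /dev/urandom) and dd's it onto every export with O_DIRECT, then in verify mode, which cmp's the first 1 MiB of each device byte-for-byte against the same pattern file before deleting it. Condensed sketch:

    # Sketch of nbd_dd_data_verify as traced above; the device list and
    # temp-file name are taken from this log.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # write phase
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done
    for i in "${nbd_list[@]}"; do                                 # verify phase
        cmp -b -n 1M "$tmp_file" "$i"   # non-zero exit on any byte mismatch
    done
    rm "$tmp_file"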
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:00.364   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:00.623    16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:00.623   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:00.883    16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:00.883   16:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:09:01.142    16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:01.142   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:09:01.401    16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:01.401   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:09:01.661    16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:01.661   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:09:01.921    16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:01.921   16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:01.921    16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:01.921    16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:01.921     16:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:01.921    16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:01.921     16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:01.921     16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:02.180    16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:02.181     16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:02.181     16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:02.181     16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:02.181    16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:02.181    16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
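Teardown mirrors setup: nbd_stop_disk detaches each export, waitfornbd_exit polls /proc/partitions until the name is gone, and the final nbd_get_disks must come back as an empty array, so the count lands on 0 and '[' 0 -ne 0 ']' stays false. Sketch of the exit-wait (polling cadence assumed; only the grep and the 20-try bound are visible in the trace):

    # Sketch of waitfornbd_exit as traced above:
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed delay between polls
        done
        ! grep -q -w "$nbd_name" /proc/partitions
    }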
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:09:02.181   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:09:02.440  malloc_lvol_verify
00:09:02.440   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:09:02.440  9e270568-c9d4-47f9-938c-f44cc5bef9a7
00:09:02.440   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:09:02.698  c8b95895-5d6c-494d-8572-519dc8dc60a2
00:09:02.698   16:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:09:02.960  /dev/nbd0
00:09:02.960   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:09:02.960   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:09:02.960   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:09:02.960   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:09:02.960   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:09:02.960  mke2fs 1.47.0 (5-Feb-2023)
00:09:02.960  Discarding device blocks:    0/4096         done                            
00:09:02.960  Creating filesystem with 4096 1k blocks and 1024 inodes
00:09:02.960  
00:09:02.960  Allocating group tables: 0/1   done                            
00:09:02.961  Writing inode tables: 0/1   done                            
00:09:02.961  Creating journal (1024 blocks): done
00:09:02.961  Writing superblocks and filesystem accounting information: 0/1   done
00:09:02.961  
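nbd_with_lvol_verify builds a small stack end to end: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on it, a 4 MiB lvol, an nbd export of lvs/lvol, a wait for /sys/block/nbd0/size to report non-zero capacity (8192 sectors x 512 B = 4 MiB here), and finally mkfs.ext4 on the export, which only succeeds if real capacity made it through the whole chain (the "4096 1k blocks" in the mke2fs output is that same 4 MiB). The RPC side, replayed from this log:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0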
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:02.961   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:03.239    16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62260
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62260 ']'
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62260
00:09:03.239    16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:09:03.239   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:03.239    16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62260
00:09:03.240  killing process with pid 62260
00:09:03.240   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:03.240   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:03.240   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62260'
00:09:03.240   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62260
00:09:03.240   16:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62260
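killprocess is the standard teardown guard: it refuses an empty pid, uses kill -0 to see whether the process is still alive, checks the process name with ps so a recycled pid is never killed by mistake (here it must be the SPDK reactor, reactor_0, and would get extra handling only if it were sudo), then sends the signal and waits for exit. Condensed sketch, with the sudo branch elided since it is not taken above:

    # Sketch of the killprocess flow traced above (pid from this log):
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }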
00:09:04.616   16:19:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:09:04.616  
00:09:04.616  real	0m11.162s
00:09:04.616  user	0m14.406s
00:09:04.616  sys	0m4.688s
00:09:04.616   16:19:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:04.616   16:19:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:09:04.616  ************************************
00:09:04.616  END TEST bdev_nbd
00:09:04.616  ************************************
00:09:04.616   16:19:33 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:09:04.616   16:19:33 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:09:04.616  skipping fio tests on NVMe due to multi-ns failures.
00:09:04.616   16:19:33 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:09:04.616   16:19:33 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:04.616   16:19:33 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:04.616   16:19:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:04.616   16:19:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:04.617   16:19:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:04.617  ************************************
00:09:04.617  START TEST bdev_verify
00:09:04.617  ************************************
00:09:04.617   16:19:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:04.617  [2024-12-09 16:19:33.701834] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:04.617  [2024-12-09 16:19:33.701987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62646 ]
00:09:04.876  [2024-12-09 16:19:33.882051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:04.876  [2024-12-09 16:19:34.003770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.876  [2024-12-09 16:19:34.003801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
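bdev_verify drives the same six namespaces through bdevperf instead of nbd. The flags on the command line explain the table that follows: -q 128 is the queue depth, -o 4096 the IO size in bytes, -w verify the read-back-and-check workload, -t 5 the runtime in seconds, and -m 0x3 the core mask, which is why two reactors come up on cores 0 and 1 and every bdev appears twice below, once per core mask (-C is passed by the harness as well). The invocation, as run here:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3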
00:09:05.813  Running I/O for 5 seconds...
00:09:07.686      20544.00 IOPS,    80.25 MiB/s
[2024-12-09T16:19:38.242Z]     21344.00 IOPS,    83.38 MiB/s
[2024-12-09T16:19:39.178Z]     21504.00 IOPS,    84.00 MiB/s
[2024-12-09T16:19:40.114Z]     22016.00 IOPS,    86.00 MiB/s
[2024-12-09T16:19:40.114Z]     22720.00 IOPS,    88.75 MiB/s
00:09:10.935                                                                                                  Latency(us)
00:09:10.935  
[2024-12-09T16:19:40.114Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:10.935  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x0 length 0xbd0bd
00:09:10.935  	 Nvme0n1             :       5.05    1899.60       7.42       0.00     0.00   67217.68   15370.69   90539.69
00:09:10.935  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:10.935  	 Nvme0n1             :       5.05    1851.92       7.23       0.00     0.00   68931.18   15054.86   89276.35
00:09:10.935  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x0 length 0xa0000
00:09:10.935  	 Nvme1n1             :       5.05    1899.12       7.42       0.00     0.00   67115.33   14633.74   82959.63
00:09:10.935  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0xa0000 length 0xa0000
00:09:10.935  	 Nvme1n1             :       5.05    1850.94       7.23       0.00     0.00   68833.07   16949.87   81696.28
00:09:10.935  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x0 length 0x80000
00:09:10.935  	 Nvme2n1             :       5.06    1898.15       7.41       0.00     0.00   66914.01   14528.46   70326.18
00:09:10.935  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x80000 length 0x80000
00:09:10.935  	 Nvme2n1             :       5.05    1850.50       7.23       0.00     0.00   68620.41   19792.40   69483.95
00:09:10.935  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x0 length 0x80000
00:09:10.935  	 Nvme2n2             :       5.06    1897.70       7.41       0.00     0.00   66766.99   14002.07   61061.65
00:09:10.935  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x80000 length 0x80000
00:09:10.935  	 Nvme2n2             :       5.11    1853.84       7.24       0.00     0.00   68416.81   18318.50   63588.34
00:09:10.935  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x0 length 0x80000
00:09:10.935  	 Nvme2n3             :       5.09    1912.16       7.47       0.00     0.00   66186.24   10475.23   62325.00
00:09:10.935  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x80000 length 0x80000
00:09:10.935  	 Nvme2n3             :       5.11    1852.88       7.24       0.00     0.00   68297.47   17370.99   65272.80
00:09:10.935  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x0 length 0x20000
00:09:10.935  	 Nvme3n1             :       5.09    1911.71       7.47       0.00     0.00   66065.33    6895.76   64851.69
00:09:10.935  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.935  	 Verification LBA range: start 0x20000 length 0x20000
00:09:10.935  	 Nvme3n1             :       5.11    1852.49       7.24       0.00     0.00   68168.18   13896.79   66115.03
00:09:10.935  
[2024-12-09T16:19:40.114Z]  ===================================================================================================================
00:09:10.935  
[2024-12-09T16:19:40.114Z]  Total                       :              22530.99      88.01       0.00     0.00   67614.28    6895.76   90539.69
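The totals are self-consistent with the 4096-byte IO size: the last one-second sample of 22720.00 IOPS is 22720 x 4096 B / 2^20 = 88.75 MiB/s exactly, and the run-wide average of 22530.99 IOPS works out to the 88.01 MiB/s shown in the Total row. No failures or timeouts were recorded for any job.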
00:09:12.314  ************************************
00:09:12.314  END TEST bdev_verify
00:09:12.314  ************************************
00:09:12.314  
00:09:12.314  real	0m7.727s
00:09:12.314  user	0m14.312s
00:09:12.314  sys	0m0.294s
00:09:12.314   16:19:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.314   16:19:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:12.314   16:19:41 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:12.314   16:19:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:12.314   16:19:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.314   16:19:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:12.314  ************************************
00:09:12.314  START TEST bdev_verify_big_io
00:09:12.314  ************************************
00:09:12.314   16:19:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:12.314  [2024-12-09 16:19:41.488052] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:12.314  [2024-12-09 16:19:41.488179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62749 ]
00:09:12.574  [2024-12-09 16:19:41.658678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:12.833  [2024-12-09 16:19:41.772828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.833  [2024-12-09 16:19:41.772860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:13.819  Running I/O for 5 seconds...
00:09:18.260       1956.00 IOPS,   122.25 MiB/s
[2024-12-09T16:19:48.007Z]      3282.00 IOPS,   205.12 MiB/s
[2024-12-09T16:19:48.575Z]      2815.00 IOPS,   175.94 MiB/s
[2024-12-09T16:19:48.575Z]      3013.00 IOPS,   188.31 MiB/s
00:09:19.396                                                                                                  Latency(us)
00:09:19.396  
[2024-12-09T16:19:48.575Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:19.396  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x0 length 0xbd0b
00:09:19.396  	 Nvme0n1             :       5.53     163.97      10.25       0.00     0.00  743234.14   25266.89  788327.02
00:09:19.396  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:19.396  	 Nvme0n1             :       5.52     173.82      10.86       0.00     0.00  713123.69   26424.96  784958.10
00:09:19.396  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x0 length 0xa000
00:09:19.396  	 Nvme1n1             :       5.58     172.10      10.76       0.00     0.00  708776.67   48849.32  663677.02
00:09:19.396  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0xa000 length 0xa000
00:09:19.396  	 Nvme1n1             :       5.53     173.72      10.86       0.00     0.00  697683.80   69905.07  643463.51
00:09:19.396  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x0 length 0x8000
00:09:19.396  	 Nvme2n1             :       5.65     176.61      11.04       0.00     0.00  675877.01   25372.17  683890.53
00:09:19.396  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x8000 length 0x8000
00:09:19.396  	 Nvme2n1             :       5.61     178.81      11.18       0.00     0.00  665959.46   44848.73  656939.18
00:09:19.396  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x0 length 0x8000
00:09:19.396  	 Nvme2n2             :       5.65     177.46      11.09       0.00     0.00  657114.16   26846.07  697366.21
00:09:19.396  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x8000 length 0x8000
00:09:19.396  	 Nvme2n2             :       5.61     182.48      11.41       0.00     0.00  640581.91   35373.65  670414.86
00:09:19.396  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x0 length 0x8000
00:09:19.396  	 Nvme2n3             :       5.69     180.29      11.27       0.00     0.00  630195.20   43585.39  734424.31
00:09:19.396  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x8000 length 0x8000
00:09:19.396  	 Nvme2n3             :       5.65     185.35      11.58       0.00     0.00  614499.00   40005.91  687259.45
00:09:19.396  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x0 length 0x2000
00:09:19.396  	 Nvme3n1             :       5.70     198.26      12.39       0.00     0.00  564446.59    7737.99  724317.56
00:09:19.396  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.396  	 Verification LBA range: start 0x2000 length 0x2000
00:09:19.396  	 Nvme3n1             :       5.69     202.58      12.66       0.00     0.00  552615.10    2947.80  704104.04
00:09:19.396  
[2024-12-09T16:19:48.575Z]  ===================================================================================================================
00:09:19.396  
[2024-12-09T16:19:48.575Z]  Total                       :               2165.44     135.34       0.00     0.00  651655.02    2947.80  788327.02
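The same cross-check holds at the 64 KiB IO size: 2165.44 IOPS x 65536 B / 2^20 = 135.34 MiB/s, matching the Total row. Compared with the 4 KiB verify run, IOPS drop by roughly 10x while throughput grows, the expected shape when per-IO overhead rather than bandwidth dominates small-block performance.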
00:09:21.302  
00:09:21.302  real	0m8.832s
00:09:21.302  user	0m16.535s
00:09:21.302  sys	0m0.317s
00:09:21.302   16:19:50 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:21.302   16:19:50 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:21.302  ************************************
00:09:21.302  END TEST bdev_verify_big_io
00:09:21.302  ************************************
00:09:21.302   16:19:50 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:21.302   16:19:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:21.302   16:19:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:21.302   16:19:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:21.302  ************************************
00:09:21.302  START TEST bdev_write_zeroes
00:09:21.302  ************************************
00:09:21.302   16:19:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:21.302  [2024-12-09 16:19:50.387678] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:21.302  [2024-12-09 16:19:50.387804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62864 ]
00:09:21.561  [2024-12-09 16:19:50.567244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.561  [2024-12-09 16:19:50.673295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.496  Running I/O for 1 seconds...
00:09:23.506      73727.00 IOPS,   288.00 MiB/s
00:09:23.506                                                                                                  Latency(us)
00:09:23.506  
[2024-12-09T16:19:52.685Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:23.506  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.506  	 Nvme0n1             :       1.02   12262.95      47.90       0.00     0.00   10418.08    4869.14   23582.43
00:09:23.506  Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.506  	 Nvme1n1             :       1.02   12252.26      47.86       0.00     0.00   10416.63    9317.17   23898.27
00:09:23.506  Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.506  	 Nvme2n1             :       1.02   12240.69      47.82       0.00     0.00   10401.64    8896.05   23056.04
00:09:23.506  Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.506  	 Nvme2n2             :       1.02   12229.96      47.77       0.00     0.00   10368.18    8580.22   19476.56
00:09:23.506  Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.506  	 Nvme2n3             :       1.02   12219.46      47.73       0.00     0.00   10355.60    8369.66   19160.73
00:09:23.506  Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.506  	 Nvme3n1             :       1.02   12209.10      47.69       0.00     0.00   10324.69    7053.67   19160.73
00:09:23.506  
[2024-12-09T16:19:52.685Z]  ===================================================================================================================
00:09:23.506  
[2024-12-09T16:19:52.685Z]  Total                       :              73414.42     286.78       0.00     0.00   10380.80    4869.14   23898.27
00:09:24.443  
00:09:24.443  real	0m3.179s
00:09:24.443  user	0m2.820s
00:09:24.443  sys	0m0.245s
00:09:24.443   16:19:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.443   16:19:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:24.443  ************************************
00:09:24.443  END TEST bdev_write_zeroes
00:09:24.443  ************************************
00:09:24.443   16:19:53 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:24.443   16:19:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:24.443   16:19:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.443   16:19:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:24.443  ************************************
00:09:24.443  START TEST bdev_json_nonenclosed
00:09:24.443  ************************************
00:09:24.443   16:19:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:24.702  [2024-12-09 16:19:53.642419] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:24.702  [2024-12-09 16:19:53.642553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62917 ]
00:09:24.702  [2024-12-09 16:19:53.819131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.960  [2024-12-09 16:19:53.928532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.960  [2024-12-09 16:19:53.928646] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:09:24.960  [2024-12-09 16:19:53.928669] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:09:24.960  [2024-12-09 16:19:53.928681] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
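This is a negative test: bdevperf is handed a config made of valid JSON fragments that are not enclosed in a top-level object, and the test passes precisely because json_config rejects it and spdk_app_stop exits non-zero. The contents of nonenclosed.json are not shown in this log; a hypothetical shape that would trip the "not enclosed in {}" check is the bare key/value without surrounding braces:

    "subsystems": []

(Hypothetical reconstruction; only the error messages above are from the log.)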
00:09:25.219  
00:09:25.219  real	0m0.617s
00:09:25.219  user	0m0.377s
00:09:25.219  sys	0m0.137s
00:09:25.219   16:19:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:25.219  ************************************
00:09:25.219  END TEST bdev_json_nonenclosed
00:09:25.219  ************************************
00:09:25.219   16:19:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:09:25.219   16:19:54 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:25.219   16:19:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:25.219   16:19:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:25.219   16:19:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:25.219  ************************************
00:09:25.219  START TEST bdev_json_nonarray
00:09:25.219  ************************************
00:09:25.219   16:19:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:25.219  [2024-12-09 16:19:54.330684] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:25.219  [2024-12-09 16:19:54.330809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62942 ]
00:09:25.477  [2024-12-09 16:19:54.506724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.477  [2024-12-09 16:19:54.605657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.477  [2024-12-09 16:19:54.605767] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:09:25.477  [2024-12-09 16:19:54.605790] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:09:25.477  [2024-12-09 16:19:54.605801] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
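bdev_json_nonarray is the companion negative test: here the config is presumably well-formed JSON in which "subsystems" has the wrong type, for example {"subsystems": {}} with an object where an array is required, producing the "'subsystems' should be an array" rejection above and again making the app stop non-zero, as the test expects. (Again a hypothetical reconstruction of nonarray.json.)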
00:09:25.736  
00:09:25.736  real	0m0.603s
00:09:25.736  user	0m0.360s
00:09:25.736  sys	0m0.139s
00:09:25.736   16:19:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:25.736   16:19:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:09:25.736  ************************************
00:09:25.736  END TEST bdev_json_nonarray
00:09:25.736  ************************************
00:09:25.736   16:19:54 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]]
00:09:25.736   16:19:54 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]]
00:09:25.736   16:19:54 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]]
00:09:25.736   16:19:54 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:09:25.736   16:19:54 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup
00:09:25.736   16:19:54 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:09:25.994   16:19:54 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:09:25.994   16:19:54 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:09:25.994   16:19:54 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:09:25.994   16:19:54 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:09:25.994   16:19:54 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:09:25.994  
00:09:25.994  real	0m42.361s
00:09:25.994  user	1m2.693s
00:09:25.994  sys	0m7.677s
00:09:25.994   16:19:54 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:25.994   16:19:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:25.994  ************************************
00:09:25.994  END TEST blockdev_nvme
00:09:25.994  ************************************
00:09:25.994    16:19:54  -- spdk/autotest.sh@209 -- # uname -s
00:09:25.994   16:19:54  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:09:25.994   16:19:54  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:09:25.995   16:19:54  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:25.995   16:19:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:25.995   16:19:54  -- common/autotest_common.sh@10 -- # set +x
00:09:25.995  ************************************
00:09:25.995  START TEST blockdev_nvme_gpt
00:09:25.995  ************************************
00:09:25.995   16:19:54 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:09:25.995  * Looking for test storage...
00:09:25.995  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:09:25.995    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:25.995     16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version
00:09:25.995     16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:26.254    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:26.254     16:19:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:26.254    16:19:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
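
The trace above is the harness deciding whether the installed lcov predates 2.x: both version strings are split on '.', '-', and ':' and the fields are compared numerically, left to right. A minimal standalone sketch of that logic (a simplification, not the exact scripts/common.sh cmp_versions helper):

    # Sketch of the component-wise version check traced above
    # (assumption: condensed from scripts/common.sh, not the real helper).
    lt() {
        local -a ver1 ver2
        local v max
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "installed lcov predates 2.x"   # true for the 1.15 seen here
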
00:09:26.254    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:26.254    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:26.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.254  		--rc genhtml_branch_coverage=1
00:09:26.254  		--rc genhtml_function_coverage=1
00:09:26.254  		--rc genhtml_legend=1
00:09:26.254  		--rc geninfo_all_blocks=1
00:09:26.254  		--rc geninfo_unexecuted_blocks=1
00:09:26.254  		
00:09:26.254  		'
00:09:26.254    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:26.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.254  		--rc genhtml_branch_coverage=1
00:09:26.254  		--rc genhtml_function_coverage=1
00:09:26.254  		--rc genhtml_legend=1
00:09:26.254  		--rc geninfo_all_blocks=1
00:09:26.254  		--rc geninfo_unexecuted_blocks=1
00:09:26.254  		
00:09:26.254  		'
00:09:26.254    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:26.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.254  		--rc genhtml_branch_coverage=1
00:09:26.254  		--rc genhtml_function_coverage=1
00:09:26.254  		--rc genhtml_legend=1
00:09:26.254  		--rc geninfo_all_blocks=1
00:09:26.254  		--rc geninfo_unexecuted_blocks=1
00:09:26.254  		
00:09:26.254  		'
00:09:26.254    16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:26.254  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.254  		--rc genhtml_branch_coverage=1
00:09:26.254  		--rc genhtml_function_coverage=1
00:09:26.254  		--rc genhtml_legend=1
00:09:26.254  		--rc geninfo_all_blocks=1
00:09:26.254  		--rc geninfo_unexecuted_blocks=1
00:09:26.254  		
00:09:26.254  		'
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:26.254    16:19:55 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:09:26.254    16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device=
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek=
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx=
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]]
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]]
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63021
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:09:26.254   16:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 63021
00:09:26.254   16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 63021 ']'
00:09:26.254   16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:26.254   16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:26.254  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:26.254   16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:26.254   16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.254   16:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:26.254  [2024-12-09 16:19:55.342372] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:26.255  [2024-12-09 16:19:55.342493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63021 ]
00:09:26.511  [2024-12-09 16:19:55.524838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.511  [2024-12-09 16:19:55.641031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.445   16:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.445   16:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
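
waitforlisten blocks until the freshly launched spdk_tgt (pid 63021) is alive and its RPC socket at /var/tmp/spdk.sock has appeared; only then does the test continue. A hypothetical simplification of that poll loop (the real helper in common/autotest_common.sh is more thorough):

    # Sketch only; not the exact waitforlisten implementation.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 100; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0           # socket file has appeared
            sleep 0.5
        done
        return 1                                     # retries exhausted
    }
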
00:09:27.445   16:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:09:27.445   16:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf
00:09:27.445   16:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:28.011  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:28.269  Waiting for block devices as requested
00:09:28.270  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:28.270  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:28.528  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:28.528  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:33.796  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:33.796   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:09:33.796   16:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
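
The loop above is get_zoned_devs walking every namespace under /sys/class/nvme and reading its queue/zoned sysfs attribute; a namespace is excluded from the GPT test only if that attribute is something other than "none". In this run every namespace reported "none", so nothing was excluded. A condensed sketch of the same scan (simplified from the is_block_zoned/get_zoned_devs helpers traced above):

    # Sketch of the zoned-namespace scan; all namespaces here read "none".
    for nvme in /sys/class/nvme/nvme*; do
        for ns in "$nvme"/nvme*n*; do
            dev=${ns##*/}
            [[ -e /sys/block/$dev/queue/zoned ]] || continue
            [[ $(< "/sys/block/$dev/queue/zoned") != none ]] && echo "$dev is zoned, skip"
        done
    done
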
00:09:33.796   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1')
00:09:33.796   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:09:33.796   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:09:33.796   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:09:33.797    16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:09:33.797  BYT;
00:09:33.797  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:09:33.797  BYT;
00:09:33.797  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
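
The loop above picks a scratch disk for GPT testing: the first NVMe block device whose parted output contains "unrecognised disk label", i.e. a disk with no partition table at all, which is safe to relabel. A sketch of that selection (condensed from the blockdev.sh lines traced above):

    # Sketch: take the first unlabeled NVMe namespace as the GPT test target.
    gpt_nvme=
    for nvme_dev in /sys/block/nvme*; do
        dev=/dev/${nvme_dev##*/}
        pt=$(parted "$dev" -ms print 2>&1) || true   # parted exits non-zero on unlabeled disks
        if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
            gpt_nvme=$dev
            break
        fi
    done
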
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:09:33.797    16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:09:33.797     16:20:02 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:33.797    16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:09:33.797     16:20:02 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:33.797    16:20:02 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:33.797   16:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:09:34.734  The operation has completed successfully.
00:09:34.734   16:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:09:36.124  The operation has completed successfully.
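
Taken together, the steps above give the scratch disk a fresh GPT with two equal halves via parted, then use sgdisk to stamp each partition with SPDK's well-known partition type GUIDs (extracted from module/bdev/gpt/gpt.h earlier in the trace) plus fixed unique GUIDs, so that SPDK's gpt bdev module will later claim both partitions. Condensed, with the device name taken from this run:

    # The partitioning sequence above, condensed (values copied from this log).
    dev=/dev/nvme0n1
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the legacy GUID:
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"
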
00:09:36.124   16:20:04 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:36.694  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:37.262  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.262  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.262  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.262  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.520   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:09:37.520   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.520   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:37.520  []
00:09:37.520   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.520   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:09:37.520   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:09:37.520   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:09:37.520    16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:37.521   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:09:37.521   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.521   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:37.780   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
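
In the step above, gen_nvme.sh emits a "bdev" subsystem config containing one bdev_nvme_attach_controller call per PCIe controller (Nvme0 through Nvme3), and the harness pipes it into the running target through its rpc_cmd wrapper (the rpc_py alias set at blockdev.sh@12). A sketch of the equivalent standalone invocation, mirroring the traced call:

    # Sketch mirroring the rpc_cmd call above; rpc_cmd is the harness wrapper
    # that talks to the target over /var/tmp/spdk.sock.
    json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)
    rpc_cmd load_subsystem_config -j "$json"
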
00:09:37.780   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:09:37.780   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.780   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:37.780   16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.780   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat
00:09:37.780    16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:09:37.780    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.780    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:37.780    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.780    16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:09:37.780    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.780    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:38.040    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.040    16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:09:38.040    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.040    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:38.040    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.040   16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:09:38.040    16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:09:38.040    16:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:09:38.040    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.040    16:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:38.040    16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.040   16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:09:38.040    16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name
00:09:38.040    16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "9fbdb804-199b-45d9-80f1-43918c316ee1"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "9fbdb804-199b-45d9-80f1-43918c316ee1",' '  "numa_id": -1,' '  "md_size": 64,' '  "md_interleave": false,' '  "dif_type": 0,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": true,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme1n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}' '{' '  "name": "Nvme2n1",' '  "aliases": [' '    "66367e48-fc9f-4058-adff-26d694bef236"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "66367e48-fc9f-4058-adff-26d694bef236",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n2",' '  "aliases": [' '    "99deb6d7-e4fc-4ead-a3b7-9b42c7e21a08"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "99deb6d7-e4fc-4ead-a3b7-9b42c7e21a08",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    
"nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 2,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n3",' '  "aliases": [' '    "cb579af0-4d2a-4d0a-95e4-78e76badd9fc"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "cb579af0-4d2a-4d0a-95e4-78e76badd9fc",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 3,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme3n1",' '  "aliases": [' '    "4cd3b23f-ea2b-4fc5-94a8-76b9384793fa"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "4cd3b23f-ea2b-4fc5-94a8-76b9384793fa",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": 
false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:13.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:13.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12343",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": true,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": true' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:09:38.040   16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:09:38.040   16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1
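
The selection above dumps every bdev, drops the claimed ones, and keeps the names; note that Nvme1n1 itself is absent from the list because it is claimed by its two GPT partition bdevs (Nvme1n1p1/Nvme1n1p2), and Nvme0n1 ends up as the hello-world bdev. A sketch that folds the two jq passes from the trace into one:

    # Sketch of the bdev selection above (the trace does this in two jq passes).
    mapfile -t bdevs_name < <(rpc_cmd bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name')
    hello_world_bdev=${bdevs_name[0]}   # Nvme0n1 in this run
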
00:09:38.040   16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:09:38.040   16:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 63021
00:09:38.040   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 63021 ']'
00:09:38.040   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 63021
00:09:38.040    16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:09:38.040   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:38.040    16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63021
00:09:38.040   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:38.040   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:38.040   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63021'
00:09:38.040  killing process with pid 63021
00:09:38.041   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 63021
00:09:38.041   16:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 63021
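
The killprocess sequence above verifies the pid is still alive, checks it is not a sudo wrapper (the comm check at autotest_common.sh@960-964), then sends SIGTERM and reaps it. A rough sketch of that shape (simplified; the real helper handles the sudo case specially):

    # Sketch only, not the exact killprocess helper.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"                                # SIGTERM, then reap the child
        wait "$pid" 2>/dev/null || true
    }
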
00:09:40.577   16:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:40.577   16:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:40.577   16:20:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:09:40.577   16:20:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:40.577   16:20:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:40.577  ************************************
00:09:40.577  START TEST bdev_hello_world
00:09:40.577  ************************************
00:09:40.577   16:20:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:40.577  [2024-12-09 16:20:09.676708] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:40.577  [2024-12-09 16:20:09.676823] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63669 ]
00:09:40.836  [2024-12-09 16:20:09.855130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:40.836  [2024-12-09 16:20:09.965740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:41.775  [2024-12-09 16:20:10.621110] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:09:41.775  [2024-12-09 16:20:10.621160] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:09:41.775  [2024-12-09 16:20:10.621184] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:09:41.775  [2024-12-09 16:20:10.624106] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:09:41.775  [2024-12-09 16:20:10.624849] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:09:41.775  [2024-12-09 16:20:10.624884] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:09:41.775  [2024-12-09 16:20:10.625126] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:09:41.775  
00:09:41.775  [2024-12-09 16:20:10.625153] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:09:42.713  
00:09:42.713  real	0m2.161s
00:09:42.713  user	0m1.811s
00:09:42.713  sys	0m0.243s
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:42.713  ************************************
00:09:42.713  END TEST bdev_hello_world
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:42.713  ************************************
00:09:42.713   16:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:09:42.713   16:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:42.713   16:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:42.713   16:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:42.713  ************************************
00:09:42.713  START TEST bdev_bounds
00:09:42.713  ************************************
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63717
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:09:42.713  Process bdevio pid: 63717
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63717'
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63717
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63717 ']'
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:42.713  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:09:42.713   16:20:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:09:42.972  [2024-12-09 16:20:11.924292] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:42.972  [2024-12-09 16:20:11.924450] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63717 ]
00:09:42.972  [2024-12-09 16:20:12.112927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:43.233  [2024-12-09 16:20:12.233541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:43.233  [2024-12-09 16:20:12.233616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.233  [2024-12-09 16:20:12.233658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:43.800   16:20:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.800   16:20:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:09:43.800   16:20:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:09:44.059  I/O targets:
00:09:44.059    Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:09:44.059    Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:09:44.059    Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:09:44.059    Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:09:44.059    Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:09:44.059    Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:09:44.059    Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:09:44.059  
00:09:44.059  
00:09:44.059       CUnit - A unit testing framework for C - Version 2.1-3
00:09:44.059       http://cunit.sourceforge.net/
00:09:44.059  
00:09:44.059  
00:09:44.059  Suite: bdevio tests on: Nvme3n1
00:09:44.059    Test: blockdev write read block ...passed
00:09:44.059    Test: blockdev write zeroes read block ...passed
00:09:44.059    Test: blockdev write zeroes read no split ...passed
00:09:44.059    Test: blockdev write zeroes read split ...passed
00:09:44.059    Test: blockdev write zeroes read split partial ...passed
00:09:44.059    Test: blockdev reset ...[2024-12-09 16:20:13.088715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:09:44.059  [2024-12-09 16:20:13.092416] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:09:44.059  passed
00:09:44.059    Test: blockdev write read 8 blocks ...passed
00:09:44.059    Test: blockdev write read size > 128k ...passed
00:09:44.059    Test: blockdev write read invalid size ...passed
00:09:44.059    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.059    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.059    Test: blockdev write read max offset ...passed
00:09:44.059    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.059    Test: blockdev writev readv 8 blocks ...passed
00:09:44.059    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.059    Test: blockdev writev readv block ...passed
00:09:44.059    Test: blockdev writev readv size > 128k ...passed
00:09:44.059    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.059    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.102224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0204000 len:0x1000
00:09:44.059  [2024-12-09 16:20:13.102275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:44.059  passed
00:09:44.059    Test: blockdev nvme passthru rw ...passed
00:09:44.059    Test: blockdev nvme passthru vendor specific ...passed
00:09:44.059    Test: blockdev nvme admin passthru ...[2024-12-09 16:20:13.103186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:44.059  [2024-12-09 16:20:13.103228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:44.059  passed
00:09:44.059    Test: blockdev copy ...passed
00:09:44.059  Suite: bdevio tests on: Nvme2n3
00:09:44.060    Test: blockdev write read block ...passed
00:09:44.060    Test: blockdev write zeroes read block ...passed
00:09:44.060    Test: blockdev write zeroes read no split ...passed
00:09:44.060    Test: blockdev write zeroes read split ...passed
00:09:44.060    Test: blockdev write zeroes read split partial ...passed
00:09:44.060    Test: blockdev reset ...[2024-12-09 16:20:13.180326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:09:44.060  [2024-12-09 16:20:13.184459] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:44.060  passed
00:09:44.060    Test: blockdev write read 8 blocks ...passed
00:09:44.060    Test: blockdev write read size > 128k ...passed
00:09:44.060    Test: blockdev write read invalid size ...passed
00:09:44.060    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.060    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.060    Test: blockdev write read max offset ...passed
00:09:44.060    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.060    Test: blockdev writev readv 8 blocks ...passed
00:09:44.060    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.060    Test: blockdev writev readv block ...passed
00:09:44.060    Test: blockdev writev readv size > 128k ...passed
00:09:44.060    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.060    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.194653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0202000 len:0x1000
00:09:44.060  [2024-12-09 16:20:13.194827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:44.060  passed
00:09:44.060    Test: blockdev nvme passthru rw ...passed
00:09:44.060    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:20:13.196010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:44.060  [2024-12-09 16:20:13.196180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:44.060  passed
00:09:44.060    Test: blockdev nvme admin passthru ...passed
00:09:44.060    Test: blockdev copy ...passed
00:09:44.060  Suite: bdevio tests on: Nvme2n2
00:09:44.060    Test: blockdev write read block ...passed
00:09:44.060    Test: blockdev write zeroes read block ...passed
00:09:44.060    Test: blockdev write zeroes read no split ...passed
00:09:44.319    Test: blockdev write zeroes read split ...passed
00:09:44.319    Test: blockdev write zeroes read split partial ...passed
00:09:44.319    Test: blockdev reset ...[2024-12-09 16:20:13.272227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:09:44.320  [2024-12-09 16:20:13.276489] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:44.320  passed
00:09:44.320    Test: blockdev write read 8 blocks ...passed
00:09:44.320    Test: blockdev write read size > 128k ...passed
00:09:44.320    Test: blockdev write read invalid size ...passed
00:09:44.320    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.320    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.320    Test: blockdev write read max offset ...passed
00:09:44.320    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.320    Test: blockdev writev readv 8 blocks ...passed
00:09:44.320    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.320    Test: blockdev writev readv block ...passed
00:09:44.320    Test: blockdev writev readv size > 128k ...passed
00:09:44.320    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.320    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.285569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4038000 len:0x1000
00:09:44.320  [2024-12-09 16:20:13.285729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:44.320  passed
00:09:44.320    Test: blockdev nvme passthru rw ...passed
00:09:44.320    Test: blockdev nvme passthru vendor specific ...passed
00:09:44.320    Test: blockdev nvme admin passthru ...[2024-12-09 16:20:13.286619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:44.320  [2024-12-09 16:20:13.286658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:44.320  passed
00:09:44.320    Test: blockdev copy ...passed
00:09:44.320  Suite: bdevio tests on: Nvme2n1
00:09:44.320    Test: blockdev write read block ...passed
00:09:44.320    Test: blockdev write zeroes read block ...passed
00:09:44.320    Test: blockdev write zeroes read no split ...passed
00:09:44.320    Test: blockdev write zeroes read split ...passed
00:09:44.320    Test: blockdev write zeroes read split partial ...passed
00:09:44.320    Test: blockdev reset ...[2024-12-09 16:20:13.364524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:09:44.320  [2024-12-09 16:20:13.368751] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:44.320  passed
00:09:44.320    Test: blockdev write read 8 blocks ...passed
00:09:44.320    Test: blockdev write read size > 128k ...passed
00:09:44.320    Test: blockdev write read invalid size ...passed
00:09:44.320    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.320    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.320    Test: blockdev write read max offset ...passed
00:09:44.320    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.320    Test: blockdev writev readv 8 blocks ...passed
00:09:44.320    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.320    Test: blockdev writev readv block ...passed
00:09:44.320    Test: blockdev writev readv size > 128k ...passed
00:09:44.320    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.320    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4034000 len:0x1000
00:09:44.320  [2024-12-09 16:20:13.379400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:44.320  passed
00:09:44.320    Test: blockdev nvme passthru rw ...passed
00:09:44.320    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:20:13.380626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:44.320  [2024-12-09 16:20:13.380788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:44.320  passed
00:09:44.320    Test: blockdev nvme admin passthru ...passed
00:09:44.320    Test: blockdev copy ...passed
00:09:44.320  Suite: bdevio tests on: Nvme1n1p2
00:09:44.320    Test: blockdev write read block ...passed
00:09:44.320    Test: blockdev write zeroes read block ...passed
00:09:44.320    Test: blockdev write zeroes read no split ...passed
00:09:44.320    Test: blockdev write zeroes read split ...passed
00:09:44.320    Test: blockdev write zeroes read split partial ...passed
00:09:44.320    Test: blockdev reset ...[2024-12-09 16:20:13.463343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:09:44.320  [2024-12-09 16:20:13.467016] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:09:44.320  passed
00:09:44.320    Test: blockdev write read 8 blocks ...passed
00:09:44.320    Test: blockdev write read size > 128k ...passed
00:09:44.320    Test: blockdev write read invalid size ...passed
00:09:44.320    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.320    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.320    Test: blockdev write read max offset ...passed
00:09:44.320    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.320    Test: blockdev writev readv 8 blocks ...passed
00:09:44.320    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.320    Test: blockdev writev readv block ...passed
00:09:44.320    Test: blockdev writev readv size > 128k ...passed
00:09:44.320    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.320    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.477127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c4030000 len:0x1000
00:09:44.320  [2024-12-09 16:20:13.477302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:44.320  passed
00:09:44.320    Test: blockdev nvme passthru rw ...passed
00:09:44.320    Test: blockdev nvme passthru vendor specific ...passed
00:09:44.320    Test: blockdev nvme admin passthru ...passed
00:09:44.320    Test: blockdev copy ...passed
00:09:44.320  Suite: bdevio tests on: Nvme1n1p1
00:09:44.320    Test: blockdev write read block ...passed
00:09:44.320    Test: blockdev write zeroes read block ...passed
00:09:44.320    Test: blockdev write zeroes read no split ...passed
00:09:44.579    Test: blockdev write zeroes read split ...passed
00:09:44.580    Test: blockdev write zeroes read split partial ...passed
00:09:44.580    Test: blockdev reset ...[2024-12-09 16:20:13.547403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:09:44.580  [2024-12-09 16:20:13.551162] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:09:44.580  passed
00:09:44.580    Test: blockdev write read 8 blocks ...passed
00:09:44.580    Test: blockdev write read size > 128k ...passed
00:09:44.580    Test: blockdev write read invalid size ...passed
00:09:44.580    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.580    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.580    Test: blockdev write read max offset ...passed
00:09:44.580    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.580    Test: blockdev writev readv 8 blocks ...passed
00:09:44.580    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.580    Test: blockdev writev readv block ...passed
00:09:44.580    Test: blockdev writev readv size > 128k ...passed
00:09:44.580    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.580    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.560967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b040e000 len:0x1000
00:09:44.580  [2024-12-09 16:20:13.561018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:44.580  passed
00:09:44.580    Test: blockdev nvme passthru rw ...passed
00:09:44.580    Test: blockdev nvme passthru vendor specific ...passed
00:09:44.580    Test: blockdev nvme admin passthru ...passed
00:09:44.580    Test: blockdev copy ...passed
00:09:44.580  Suite: bdevio tests on: Nvme0n1
00:09:44.580    Test: blockdev write read block ...passed
00:09:44.580    Test: blockdev write zeroes read block ...passed
00:09:44.580    Test: blockdev write zeroes read no split ...passed
00:09:44.580    Test: blockdev write zeroes read split ...passed
00:09:44.580    Test: blockdev write zeroes read split partial ...passed
00:09:44.580    Test: blockdev reset ...[2024-12-09 16:20:13.650428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:09:44.580  [2024-12-09 16:20:13.654022] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:09:44.580  passed
00:09:44.580    Test: blockdev write read 8 blocks ...passed
00:09:44.580    Test: blockdev write read size > 128k ...passed
00:09:44.580    Test: blockdev write read invalid size ...passed
00:09:44.580    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:44.580    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:44.580    Test: blockdev write read max offset ...passed
00:09:44.580    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:44.580    Test: blockdev writev readv 8 blocks ...passed
00:09:44.580    Test: blockdev writev readv 30 x 1block ...passed
00:09:44.580    Test: blockdev writev readv block ...passed
00:09:44.580    Test: blockdev writev readv size > 128k ...passed
00:09:44.580    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:44.580    Test: blockdev comparev and writev ...[2024-12-09 16:20:13.662238] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:09:44.580  separate metadata which is not supported yet.
00:09:44.580  passed
00:09:44.580    Test: blockdev nvme passthru rw ...passed
00:09:44.580    Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:20:13.662828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:09:44.580  [2024-12-09 16:20:13.662976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:09:44.580  passed
00:09:44.580    Test: blockdev nvme admin passthru ...passed
00:09:44.580    Test: blockdev copy ...passed
00:09:44.580  
00:09:44.580  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:44.580                suites      7      7    n/a      0        0
00:09:44.580                 tests    161    161    161      0        0
00:09:44.580               asserts   1025   1025   1025      0      n/a
00:09:44.580  
00:09:44.580  Elapsed time =    1.765 seconds
00:09:44.580  0
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63717
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63717 ']'
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63717
00:09:44.580    16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:44.580    16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63717
00:09:44.580  killing process with pid 63717
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63717'
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63717
00:09:44.580   16:20:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63717
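Note: the xtrace above walks through the killprocess helper from common/autotest_common.sh. A minimal bash sketch consistent with the traced checks follows; it is a reconstruction from the trace, not the exact SPDK source, and the sudo branch is left as a stub since this run never takes it.

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1     # refuse an empty pid (trace line @954)
        kill -0 "$pid" || return 1    # verify the process is still alive (@958)
        if [ "$(uname)" = Linux ]; then
            # resolve the command name; the trace shows reactor_0 here (@960)
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :  # a sudo wrapper would need its child signalled instead (not hit here)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true           # reap it so the test can continue (@978)
    }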
00:09:45.960  ************************************
00:09:45.960  END TEST bdev_bounds
00:09:45.960  ************************************
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:09:45.960  
00:09:45.960  real	0m2.968s
00:09:45.960  user	0m7.564s
00:09:45.960  sys	0m0.431s
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:09:45.960   16:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:09:45.960   16:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:45.960   16:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:45.960   16:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:45.960  ************************************
00:09:45.960  START TEST bdev_nbd
00:09:45.960  ************************************
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:09:45.960    16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63782
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63782 /var/tmp/spdk-nbd.sock
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63782 ']'
00:09:45.960  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:45.960   16:20:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:09:45.960  [2024-12-09 16:20:14.967826] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:45.960  [2024-12-09 16:20:14.968200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:46.219  [2024-12-09 16:20:15.150772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:46.219  [2024-12-09 16:20:15.264372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:09:46.800   16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:46.800    16:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:09:47.058    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:47.058  1+0 records in
00:09:47.058  1+0 records out
00:09:47.058  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115012 s, 3.6 MB/s
00:09:47.058    16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
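Note: the sequence above is the waitfornbd helper verifying that /dev/nbd0 came up. A hedged bash sketch of what the trace implies follows; the retry sleep and the /tmp scratch path are assumptions (the run writes to spdk_repo/test/bdev/nbdtest), and the real helper may handle the timeout case differently.

    waitfornbd() {
        local nbd_name=$1 i size
        # poll /proc/partitions until the kernel registers the nbd device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed backoff; the trace breaks on the first try
        done
        # read one 4 KiB block through the device to prove I/O works...
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        # ...and confirm a non-empty file actually arrived
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        return 1
    }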
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:47.058   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:47.058    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:09:47.317    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:47.317  1+0 records in
00:09:47.317  1+0 records out
00:09:47.317  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731858 s, 5.6 MB/s
00:09:47.317    16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:47.317   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.318   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:47.318   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:47.318   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:47.318   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:47.318    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:09:47.577    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:47.577  1+0 records in
00:09:47.577  1+0 records out
00:09:47.577  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00180419 s, 2.3 MB/s
00:09:47.577    16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:47.577   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:47.577    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:09:47.836    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:47.836   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:47.836  1+0 records in
00:09:47.836  1+0 records out
00:09:47.837  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734677 s, 5.6 MB/s
00:09:47.837    16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.837   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:47.837   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:47.837   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:47.837   16:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:47.837   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:47.837   16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:47.837    16:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:09:48.096    16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:48.096  1+0 records in
00:09:48.096  1+0 records out
00:09:48.096  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785519 s, 5.2 MB/s
00:09:48.096    16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:48.096   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:48.096    16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:09:48.355    16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:48.355  1+0 records in
00:09:48.355  1+0 records out
00:09:48.355  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858475 s, 4.8 MB/s
00:09:48.355    16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:48.355   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:48.355    16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:09:48.614    16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:48.614  1+0 records in
00:09:48.614  1+0 records out
00:09:48.614  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767228 s, 5.3 MB/s
00:09:48.614    16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:48.614   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:48.873   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:48.873   16:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:48.873   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:48.873   16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:48.873    16:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:48.873   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:09:48.873    {
00:09:48.873      "nbd_device": "/dev/nbd0",
00:09:48.874      "bdev_name": "Nvme0n1"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd1",
00:09:48.874      "bdev_name": "Nvme1n1p1"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd2",
00:09:48.874      "bdev_name": "Nvme1n1p2"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd3",
00:09:48.874      "bdev_name": "Nvme2n1"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd4",
00:09:48.874      "bdev_name": "Nvme2n2"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd5",
00:09:48.874      "bdev_name": "Nvme2n3"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd6",
00:09:48.874      "bdev_name": "Nvme3n1"
00:09:48.874    }
00:09:48.874  ]'
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:09:48.874    16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd0",
00:09:48.874      "bdev_name": "Nvme0n1"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd1",
00:09:48.874      "bdev_name": "Nvme1n1p1"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd2",
00:09:48.874      "bdev_name": "Nvme1n1p2"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd3",
00:09:48.874      "bdev_name": "Nvme2n1"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd4",
00:09:48.874      "bdev_name": "Nvme2n2"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd5",
00:09:48.874      "bdev_name": "Nvme2n3"
00:09:48.874    },
00:09:48.874    {
00:09:48.874      "nbd_device": "/dev/nbd6",
00:09:48.874      "bdev_name": "Nvme3n1"
00:09:48.874    }
00:09:48.874  ]'
00:09:48.874    16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
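Note: the device-name extraction above reduces to one jq pipeline over the nbd_get_disks RPC output. A usage example with the rpc.py path and socket from the trace (jq must be installed):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'
    # prints one node per line: /dev/nbd0 ... /dev/nbd6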
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:48.874   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:49.133    16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
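Note: teardown mirrors startup; waitfornbd_exit polls until the node leaves /proc/partitions. A hedged sketch from the trace, with the sleep again assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        # wait for the stopped device to disappear from /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed; the trace sees it gone on the first check
        done
        return 0
    }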
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.133   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:49.392    16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.392   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:09:49.651    16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.651   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:09:49.910    16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.910   16:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:09:50.169    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:09:50.169    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:09:50.169   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:09:50.429    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:50.429   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:50.429    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:50.429    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:50.429     16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:50.687    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:50.687     16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:50.687     16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:50.687    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:50.687     16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:50.688     16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:50.688     16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:50.688    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:50.688    16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
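Note: the count check above is the nbd_get_count helper confirming zero devices remain after the stop pass. A sketch reconstructed from the trace; the `|| true` matches the bare `true` the trace runs when grep -c finds no matches and exits non-zero:

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name count
        # ask the target which NBD nodes it still exports, then count them
        nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }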
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:50.688   16:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:09:50.947  /dev/nbd0
00:09:50.947    16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:50.947  1+0 records in
00:09:50.947  1+0 records out
00:09:50.947  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382103 s, 10.7 MB/s
00:09:50.947    16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:50.947   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:09:51.207  /dev/nbd1
00:09:51.207    16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:51.207  1+0 records in
00:09:51.207  1+0 records out
00:09:51.207  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485335 s, 8.4 MB/s
00:09:51.207    16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:51.207   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:09:51.466  /dev/nbd10
00:09:51.466    16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:51.466  1+0 records in
00:09:51.466  1+0 records out
00:09:51.466  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719199 s, 5.7 MB/s
00:09:51.466    16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:51.466   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:09:51.725  /dev/nbd11
00:09:51.985    16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:51.985  1+0 records in
00:09:51.985  1+0 records out
00:09:51.985  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583377 s, 7.0 MB/s
00:09:51.985    16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:51.985   16:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:09:51.985  /dev/nbd12
00:09:52.245    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:09:52.245   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:09:52.245   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:09:52.245   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:52.245   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:52.246  1+0 records in
00:09:52.246  1+0 records out
00:09:52.246  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084366 s, 4.9 MB/s
00:09:52.246    16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:52.246   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
00:09:52.246  /dev/nbd13
00:09:52.505    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:52.505   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:52.505  1+0 records in
00:09:52.506  1+0 records out
00:09:52.506  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642797 s, 6.4 MB/s
00:09:52.506    16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14
00:09:52.506  /dev/nbd14
00:09:52.506    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:52.506   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:52.766  1+0 records in
00:09:52.766  1+0 records out
00:09:52.766  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855748 s, 4.8 MB/s
00:09:52.767    16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:52.767   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:52.767   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:52.767   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:52.767   16:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:52.767   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:52.767   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
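The block above is the readiness handshake that follows every nbd_start_disk call: poll /proc/partitions until the device name appears, then prove the device actually serves data with a single 4 KiB O_DIRECT read. A minimal sketch of waitfornbd as reconstructed from the xtrace lines (the retry sleep is an assumption; only the successful first iteration shows up in this log, and the real helper lives in common/autotest_common.sh):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the kernel lists the device here once the NBD handshake is done
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumption: interval not visible in the trace
        done
        # a direct read returning 0 bytes would mean the device is not really up
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }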
00:09:52.767    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:52.767    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:52.767     16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:52.767    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd0",
00:09:52.767      "bdev_name": "Nvme0n1"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd1",
00:09:52.767      "bdev_name": "Nvme1n1p1"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd10",
00:09:52.767      "bdev_name": "Nvme1n1p2"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd11",
00:09:52.767      "bdev_name": "Nvme2n1"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd12",
00:09:52.767      "bdev_name": "Nvme2n2"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd13",
00:09:52.767      "bdev_name": "Nvme2n3"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd14",
00:09:52.767      "bdev_name": "Nvme3n1"
00:09:52.767    }
00:09:52.767  ]'
00:09:52.767     16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd0",
00:09:52.767      "bdev_name": "Nvme0n1"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd1",
00:09:52.767      "bdev_name": "Nvme1n1p1"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd10",
00:09:52.767      "bdev_name": "Nvme1n1p2"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd11",
00:09:52.767      "bdev_name": "Nvme2n1"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd12",
00:09:52.767      "bdev_name": "Nvme2n2"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd13",
00:09:52.767      "bdev_name": "Nvme2n3"
00:09:52.767    },
00:09:52.767    {
00:09:52.767      "nbd_device": "/dev/nbd14",
00:09:52.767      "bdev_name": "Nvme3n1"
00:09:52.767    }
00:09:52.767  ]'
00:09:52.767     16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:53.026    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:53.026  /dev/nbd1
00:09:53.026  /dev/nbd10
00:09:53.026  /dev/nbd11
00:09:53.026  /dev/nbd12
00:09:53.026  /dev/nbd13
00:09:53.026  /dev/nbd14'
00:09:53.026     16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:53.026  /dev/nbd1
00:09:53.026  /dev/nbd10
00:09:53.026  /dev/nbd11
00:09:53.026  /dev/nbd12
00:09:53.026  /dev/nbd13
00:09:53.026  /dev/nbd14'
00:09:53.026     16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:53.026    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7
00:09:53.026    16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']'
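With all seven devices attached, nbd_get_count cross-checks the daemon's own view: nbd_get_disks returns the JSON array of {nbd_device, bdev_name} pairs printed above, and the test counts the /dev/nbd entries with jq and grep. The same check, condensed (rpc.py path shortened for readability):

    count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd)
    (( count == 7 )) || exit 1  # every exported bdev must be listed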
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:53.026   16:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:09:53.026  256+0 records in
00:09:53.026  256+0 records out
00:09:53.026  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011833 s, 88.6 MB/s
00:09:53.026   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.026   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:53.026  256+0 records in
00:09:53.026  256+0 records out
00:09:53.026  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142792 s, 7.3 MB/s
00:09:53.026   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.027   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:53.286  256+0 records in
00:09:53.286  256+0 records out
00:09:53.286  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157642 s, 6.7 MB/s
00:09:53.286   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.286   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:09:53.545  256+0 records in
00:09:53.545  256+0 records out
00:09:53.545  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146215 s, 7.2 MB/s
00:09:53.545   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.545   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:09:53.545  256+0 records in
00:09:53.545  256+0 records out
00:09:53.545  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144166 s, 7.3 MB/s
00:09:53.545   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.545   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:09:53.804  256+0 records in
00:09:53.804  256+0 records out
00:09:53.804  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145448 s, 7.2 MB/s
00:09:53.804   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.804   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:09:53.804  256+0 records in
00:09:53.804  256+0 records out
00:09:53.804  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142517 s, 7.4 MB/s
00:09:53.804   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:53.804   16:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:09:54.063  256+0 records in
00:09:54.063  256+0 records out
00:09:54.063  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147054 s, 7.1 MB/s
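The write half of nbd_dd_data_verify above seeds one 1 MiB random pattern and pushes the identical bytes through every exported device with O_DIRECT, so the later comparison exercises the real block path rather than the page cache. In outline:

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # one shared 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done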
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:54.063   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
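The verify half reads the same range back through cmp; any differing byte makes cmp exit non-zero, which aborts the test under errexit, so the silent run above is the success case:

    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # -b would print the first differing byte
    done
    rm "$tmp_file"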
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:54.064   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:54.323    16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:54.323   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:54.583    16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:54.583   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:54.583   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:54.583   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:54.584   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:54.584   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:54.584   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:54.584   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:54.584   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:54.584   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:09:54.843    16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:09:54.843    16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:54.843   16:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:09:55.102    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:55.102   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:09:55.361    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:55.361   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:09:55.620    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:55.620   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
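Teardown mirrors startup: nbd_stop_disk is issued per device over the RPC socket, then waitfornbd_exit polls /proc/partitions until the name disappears. A sketch, again with the retry interval assumed rather than visible in the trace:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # inverse of waitfornbd: done once the kernel drops the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1  # assumption: interval not visible in the trace
        done
        return 0
    }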
00:09:55.620    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:55.620    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.620     16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:55.880    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:55.880     16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:55.880     16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:55.880    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:55.880     16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:55.880     16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:55.880     16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:55.880    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:55.880    16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:09:55.880   16:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:09:56.138  malloc_lvol_verify
00:09:56.138   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:09:56.397  1a4ad7b2-8a98-4fcd-bc0e-8d901bbbbb0f
00:09:56.397   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:09:56.656  c957bcd1-d1a2-4d64-9072-cf6f925dfcb6
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:09:56.656  /dev/nbd0
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:09:56.656  mke2fs 1.47.0 (5-Feb-2023)
00:09:56.656  Discarding device blocks: done
00:09:56.656  Creating filesystem with 4096 1k blocks and 1024 inodes
00:09:56.656  
00:09:56.656  Allocating group tables: done
00:09:56.656  Writing inode tables: done
00:09:56.656  Creating journal (1024 blocks): done
00:09:56.656  Writing superblocks and filesystem accounting information: done
00:09:56.656  
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:56.656   16:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:56.915    16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
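nbd_with_lvol_verify, traced above, stacks a logical volume on a 16 MiB malloc bdev and proves the exported device is a fully functional block device by formatting it with ext4 (the 4 MiB lvol matches the 8192 512-byte sectors reported in /sys/block/nbd0/size). The sequence, condensed from the trace:

    rpc.py -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB, 512 B blocks
    rpc.py -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB volume
    rpc.py -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # wait until /sys/block/nbd0/size is non-zero before touching the device
    mkfs.ext4 /dev/nbd0
    rpc.py -s "$sock" nbd_stop_disk /dev/nbd0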
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63782
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63782 ']'
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63782
00:09:56.915    16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:56.915    16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63782
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:56.915  killing process with pid 63782
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63782'
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63782
00:09:56.915   16:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63782
00:09:58.294   16:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:09:58.294  
00:09:58.294  real	0m12.406s
00:09:58.294  user	0m15.816s
00:09:58.294  sys	0m5.360s
00:09:58.294  ************************************
00:09:58.294  END TEST bdev_nbd
00:09:58.294  ************************************
00:09:58.294   16:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:58.294   16:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:09:58.294   16:20:27 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:09:58.294   16:20:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']'
00:09:58.294   16:20:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']'
00:09:58.294  skipping fio tests on NVMe due to multi-ns failures.
00:09:58.294   16:20:27 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:09:58.294   16:20:27 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:58.294   16:20:27 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:58.294   16:20:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:58.294   16:20:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:58.294   16:20:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:58.294  ************************************
00:09:58.294  START TEST bdev_verify
00:09:58.294  ************************************
00:09:58.294   16:20:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:58.294  [2024-12-09 16:20:27.437821] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:09:58.294  [2024-12-09 16:20:27.437960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64204 ]
00:09:58.553  [2024-12-09 16:20:27.621883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:58.812  [2024-12-09 16:20:27.742401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.812  [2024-12-09 16:20:27.742429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:59.380  Running I/O for 5 seconds...
00:10:01.695      20992.00 IOPS,    82.00 MiB/s
[2024-12-09T16:20:31.810Z]     21376.00 IOPS,    83.50 MiB/s
[2024-12-09T16:20:32.747Z]     22016.00 IOPS,    86.00 MiB/s
[2024-12-09T16:20:33.685Z]     22720.00 IOPS,    88.75 MiB/s
[2024-12-09T16:20:33.685Z]     22476.80 IOPS,    87.80 MiB/s
00:10:04.506                                                                                                  Latency(us)
00:10:04.506  
[2024-12-09T16:20:33.685Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:04.506  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0xbd0bd
00:10:04.506  	 Nvme0n1             :       5.04    1573.89       6.15       0.00     0.00   81026.38   19792.40   85907.43
00:10:04.506  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:04.506  	 Nvme0n1             :       5.07    1577.03       6.16       0.00     0.00   80753.41   15686.53   84644.09
00:10:04.506  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0x4ff80
00:10:04.506  	 Nvme1n1p1           :       5.08    1575.92       6.16       0.00     0.00   80710.96   10738.43   80854.05
00:10:04.506  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:10:04.506  	 Nvme1n1p1           :       5.07    1576.59       6.16       0.00     0.00   80635.94   15791.81   78327.36
00:10:04.506  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0x4ff7f
00:10:04.506  	 Nvme1n1p2           :       5.08    1575.40       6.15       0.00     0.00   80543.56   10948.99   72431.76
00:10:04.506  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:10:04.506  	 Nvme1n1p2           :       5.09    1585.00       6.19       0.00     0.00   80293.95   10843.71   69062.84
00:10:04.506  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0x80000
00:10:04.506  	 Nvme2n1             :       5.09    1583.99       6.19       0.00     0.00   80170.22    9738.28   68220.61
00:10:04.506  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x80000 length 0x80000
00:10:04.506  	 Nvme2n1             :       5.09    1584.57       6.19       0.00     0.00   80204.49   10948.99   65693.92
00:10:04.506  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0x80000
00:10:04.506  	 Nvme2n2             :       5.09    1583.49       6.19       0.00     0.00   80045.78   10054.12   69062.84
00:10:04.506  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x80000 length 0x80000
00:10:04.506  	 Nvme2n2             :       5.09    1584.14       6.19       0.00     0.00   80097.47   11212.18   64009.46
00:10:04.506  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0x80000
00:10:04.506  	 Nvme2n3             :       5.09    1582.98       6.18       0.00     0.00   79917.73   10369.95   69062.84
00:10:04.506  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x80000 length 0x80000
00:10:04.506  	 Nvme2n3             :       5.09    1583.69       6.19       0.00     0.00   79966.30   11738.58   64851.69
00:10:04.506  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x0 length 0x20000
00:10:04.506  	 Nvme3n1             :       5.10    1582.53       6.18       0.00     0.00   79809.10   10527.87   68641.72
00:10:04.506  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.506  	 Verification LBA range: start 0x20000 length 0x20000
00:10:04.506  	 Nvme3n1             :       5.09    1583.19       6.18       0.00     0.00   79836.80   11580.66   68220.61
00:10:04.506  
[2024-12-09T16:20:33.685Z]  ===================================================================================================================
00:10:04.506  
[2024-12-09T16:20:33.685Z]  Total                       :              22132.40      86.45       0.00     0.00   80284.88    9738.28   85907.43
00:10:05.886  
00:10:05.886  real	0m7.567s
00:10:05.886  user	0m13.979s
00:10:05.886  sys	0m0.310s
00:10:05.886   16:20:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:05.886   16:20:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:05.886  ************************************
00:10:05.886  END TEST bdev_verify
00:10:05.886  ************************************
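bdev_verify hands the whole bdev.json topology to bdevperf with a verify workload; the duplicate per-core rows in the table come from running the same bdevs from both cores in the 0x3 mask. The invocation, annotated (the reading of -C as one job per core per bdev is inferred from those duplicate rows, not stated anywhere in this log):

    bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128     128 outstanding I/Os per job
    # -o 4096    4 KiB I/O size
    # -w verify  write a pattern, read it back, compare
    # -t 5       run for 5 seconds
    # -C         inferred: pair every bdev with every core in the mask
    # -m 0x3     cores 0 and 1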
00:10:05.886   16:20:34 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:05.886   16:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:05.886   16:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:05.886   16:20:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:05.886  ************************************
00:10:05.886  START TEST bdev_verify_big_io
00:10:05.886  ************************************
00:10:05.886   16:20:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:06.144  [2024-12-09 16:20:35.079669] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:10:06.144  [2024-12-09 16:20:35.079779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64302 ]
00:10:06.144  [2024-12-09 16:20:35.262295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:06.403  [2024-12-09 16:20:35.374356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.403  [2024-12-09 16:20:35.374384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:07.341  Running I/O for 5 seconds...
00:10:11.254       2004.00 IOPS,   125.25 MiB/s
[2024-12-09T16:20:41.371Z]      2965.50 IOPS,   185.34 MiB/s
[2024-12-09T16:20:42.309Z]      2766.00 IOPS,   172.88 MiB/s
[2024-12-09T16:20:42.309Z]      2934.00 IOPS,   183.38 MiB/s
00:10:13.130                                                                                                  Latency(us)
00:10:13.130  
[2024-12-09T16:20:42.309Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:13.130  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0xbd0b
00:10:13.130  	 Nvme0n1             :       5.67     147.02       9.19       0.00     0.00  839605.40   19687.12  869181.07
00:10:13.130  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:13.130  	 Nvme0n1             :       5.64     145.33       9.08       0.00     0.00  853206.34   29478.04 1158908.09
00:10:13.130  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0x4ff8
00:10:13.130  	 Nvme1n1p1           :       5.67     152.11       9.51       0.00     0.00  788474.07   74116.22  731055.40
00:10:13.130  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:10:13.130  	 Nvme1n1p1           :       5.64     153.46       9.59       0.00     0.00  789361.55   65272.80  771482.42
00:10:13.130  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0x4ff7
00:10:13.130  	 Nvme1n1p2           :       5.72     157.14       9.82       0.00     0.00  752304.13   77906.25  710841.88
00:10:13.130  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:10:13.130  	 Nvme1n1p2           :       5.70     157.55       9.85       0.00     0.00  754255.87   63167.23  778220.26
00:10:13.130  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0x8000
00:10:13.130  	 Nvme2n1             :       5.67     147.93       9.25       0.00     0.00  784075.97   77064.02 1367781.06
00:10:13.130  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x8000 length 0x8000
00:10:13.130  	 Nvme2n1             :       5.70     157.48       9.84       0.00     0.00  737723.60   63167.23  734424.31
00:10:13.130  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0x8000
00:10:13.130  	 Nvme2n2             :       5.77     158.15       9.88       0.00     0.00  721974.21   29478.04 1394732.41
00:10:13.130  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x8000 length 0x8000
00:10:13.130  	 Nvme2n2             :       5.70     160.94      10.06       0.00     0.00  710703.80   60219.42  747899.99
00:10:13.130  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0x8000
00:10:13.130  	 Nvme2n3             :       5.78     163.73      10.23       0.00     0.00  684543.52   16844.59 1408208.09
00:10:13.130  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x8000 length 0x8000
00:10:13.130  	 Nvme2n3             :       5.76     173.58      10.85       0.00     0.00  649658.10   26740.79  764744.58
00:10:13.130  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.130  	 Verification LBA range: start 0x0 length 0x2000
00:10:13.131  	 Nvme3n1             :       5.78     174.27      10.89       0.00     0.00  629997.93    3895.31 1435159.44
00:10:13.131  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.131  	 Verification LBA range: start 0x2000 length 0x2000
00:10:13.131  	 Nvme3n1             :       5.77     181.75      11.36       0.00     0.00  607655.75    2974.12  788327.02
00:10:13.131  
[2024-12-09T16:20:42.310Z]  ===================================================================================================================
00:10:13.131  
[2024-12-09T16:20:42.310Z]  Total                       :               2230.44     139.40       0.00     0.00  730839.04    2974.12 1435159.44
00:10:15.098  
00:10:15.098  real	0m8.892s
00:10:15.098  user	0m16.607s
00:10:15.098  sys	0m0.341s
00:10:15.098   16:20:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:15.098   16:20:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:15.098  ************************************
00:10:15.098  END TEST bdev_verify_big_io
00:10:15.098  ************************************
00:10:15.098   16:20:43 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:15.098   16:20:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:15.098   16:20:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:15.098   16:20:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:15.098  ************************************
00:10:15.098  START TEST bdev_write_zeroes
00:10:15.098  ************************************
00:10:15.098   16:20:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:15.098  [2024-12-09 16:20:44.047485] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:10:15.098  [2024-12-09 16:20:44.047617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64422 ]
00:10:15.098  [2024-12-09 16:20:44.216387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:15.357  [2024-12-09 16:20:44.323231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.925  Running I/O for 1 seconds...
00:10:17.116      69440.00 IOPS,   271.25 MiB/s
00:10:17.116                                                                                                  Latency(us)
00:10:17.116  
[2024-12-09T16:20:46.295Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:17.116  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme0n1             :       1.02    9887.48      38.62       0.00     0.00   12919.61   11422.74   26319.68
00:10:17.116  Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme1n1p1           :       1.02    9877.79      38.59       0.00     0.00   12916.43   11370.10   26424.96
00:10:17.116  Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme1n1p2           :       1.02    9868.26      38.55       0.00     0.00   12899.63   11159.54   25266.89
00:10:17.116  Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme2n1             :       1.03    9859.28      38.51       0.00     0.00   12854.12   11264.82   22424.37
00:10:17.116  Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme2n2             :       1.03    9850.46      38.48       0.00     0.00   12850.67   11264.82   22319.09
00:10:17.116  Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme2n3             :       1.03    9841.69      38.44       0.00     0.00   12813.13    9896.20   21897.97
00:10:17.116  Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.116  	 Nvme3n1             :       1.03    9832.99      38.41       0.00     0.00   12762.08    8474.94   23371.87
00:10:17.116  
[2024-12-09T16:20:46.295Z]  ===================================================================================================================
00:10:17.116  
[2024-12-09T16:20:46.295Z]  Total                       :              69017.94     269.60       0.00     0.00   12859.38    8474.94   26424.96
00:10:18.054  
00:10:18.054  real	0m3.191s
00:10:18.054  user	0m2.819s
00:10:18.054  sys	0m0.256s
00:10:18.054   16:20:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:18.054   16:20:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:18.054  ************************************
00:10:18.054  END TEST bdev_write_zeroes
00:10:18.054  ************************************
00:10:18.054   16:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:18.054   16:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:18.054   16:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:18.054   16:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:18.313  ************************************
00:10:18.313  START TEST bdev_json_nonenclosed
00:10:18.313  ************************************
00:10:18.313   16:20:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:18.313  [2024-12-09 16:20:47.322276] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:10:18.313  [2024-12-09 16:20:47.322403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64475 ]
00:10:18.572  [2024-12-09 16:20:47.502248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.572  [2024-12-09 16:20:47.607853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:18.572  [2024-12-09 16:20:47.607953] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:18.572  [2024-12-09 16:20:47.607992] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:10:18.573  [2024-12-09 16:20:47.608004] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:18.832  
00:10:18.832  real	0m0.618s
00:10:18.832  user	0m0.371s
00:10:18.832  sys	0m0.142s
00:10:18.832   16:20:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:18.832  ************************************
00:10:18.832  END TEST bdev_json_nonenclosed
00:10:18.832  ************************************
00:10:18.832   16:20:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:10:18.832   16:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:18.832   16:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:18.832   16:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:18.832   16:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:18.832  ************************************
00:10:18.832  START TEST bdev_json_nonarray
00:10:18.832  ************************************
00:10:18.832   16:20:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.091  [2024-12-09 16:20:48.009738] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:10:19.091  [2024-12-09 16:20:48.009864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64500 ]
00:10:19.091  [2024-12-09 16:20:48.189390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.350  [2024-12-09 16:20:48.293929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.350  [2024-12-09 16:20:48.294038] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:10:19.350  [2024-12-09 16:20:48.294060] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:10:19.350  [2024-12-09 16:20:48.294073] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:19.609  ************************************
00:10:19.609  END TEST bdev_json_nonarray
00:10:19.609  ************************************
00:10:19.609  
00:10:19.609  real	0m0.610s
00:10:19.609  user	0m0.358s
00:10:19.609  sys	0m0.148s
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
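Both JSON negative tests above feed bdevperf a deliberately malformed config and only pass when it refuses to start with the exact errors logged (not enclosed in {} / 'subsystems' should be an array). Illustrative inputs, hypothetical contents since nonenclosed.json and nonarray.json themselves are not shown in this log:

    # nonenclosed.json: a bare fragment, not enclosed in { }
    "subsystems": []

    # nonarray.json: enclosed, but "subsystems" is not an array
    { "subsystems": "not-an-array" }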
00:10:19.609   16:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]]
00:10:19.609   16:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]]
00:10:19.609   16:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:10:19.609   16:20:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:19.609   16:20:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:19.609   16:20:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:19.609  ************************************
00:10:19.609  START TEST bdev_gpt_uuid
00:10:19.609  ************************************
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64526
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64526
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64526 ']'
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:19.609   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:19.609  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:19.610   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:19.610   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:19.610   16:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:19.610  [2024-12-09 16:20:48.718919] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:10:19.610  [2024-12-09 16:20:48.719052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64526 ]
00:10:19.870  [2024-12-09 16:20:48.900348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.870  [2024-12-09 16:20:49.009041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.809   16:20:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:20.809   16:20:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:10:20.809   16:20:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:20.809   16:20:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.809   16:20:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:21.068  Some configs were skipped because the RPC state that can call them passed over.
00:10:21.068   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.068   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine
00:10:21.068   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.068   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:21.068   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.068    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:10:21.068    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.068    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:21.068    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.068   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[
00:10:21.068    {
00:10:21.068      "name": "Nvme1n1p1",
00:10:21.068      "aliases": [
00:10:21.068        "6f89f330-603b-4116-ac73-2ca8eae53030"
00:10:21.068      ],
00:10:21.068      "product_name": "GPT Disk",
00:10:21.068      "block_size": 4096,
00:10:21.068      "num_blocks": 655104,
00:10:21.068      "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:21.068      "assigned_rate_limits": {
00:10:21.068        "rw_ios_per_sec": 0,
00:10:21.068        "rw_mbytes_per_sec": 0,
00:10:21.068        "r_mbytes_per_sec": 0,
00:10:21.068        "w_mbytes_per_sec": 0
00:10:21.068      },
00:10:21.068      "claimed": false,
00:10:21.068      "zoned": false,
00:10:21.068      "supported_io_types": {
00:10:21.068        "read": true,
00:10:21.068        "write": true,
00:10:21.068        "unmap": true,
00:10:21.068        "flush": true,
00:10:21.068        "reset": true,
00:10:21.068        "nvme_admin": false,
00:10:21.068        "nvme_io": false,
00:10:21.068        "nvme_io_md": false,
00:10:21.068        "write_zeroes": true,
00:10:21.068        "zcopy": false,
00:10:21.068        "get_zone_info": false,
00:10:21.068        "zone_management": false,
00:10:21.068        "zone_append": false,
00:10:21.068        "compare": true,
00:10:21.068        "compare_and_write": false,
00:10:21.068        "abort": true,
00:10:21.068        "seek_hole": false,
00:10:21.068        "seek_data": false,
00:10:21.068        "copy": true,
00:10:21.068        "nvme_iov_md": false
00:10:21.068      },
00:10:21.068      "driver_specific": {
00:10:21.068        "gpt": {
00:10:21.068          "base_bdev": "Nvme1n1",
00:10:21.068          "offset_blocks": 256,
00:10:21.068          "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:10:21.068          "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:21.068          "partition_name": "SPDK_TEST_first"
00:10:21.068        }
00:10:21.068      }
00:10:21.068    }
00:10:21.068  ]'
00:10:21.068    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]]
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]'
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[
00:10:21.328    {
00:10:21.328      "name": "Nvme1n1p2",
00:10:21.328      "aliases": [
00:10:21.328        "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:10:21.328      ],
00:10:21.328      "product_name": "GPT Disk",
00:10:21.328      "block_size": 4096,
00:10:21.328      "num_blocks": 655103,
00:10:21.328      "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:21.328      "assigned_rate_limits": {
00:10:21.328        "rw_ios_per_sec": 0,
00:10:21.328        "rw_mbytes_per_sec": 0,
00:10:21.328        "r_mbytes_per_sec": 0,
00:10:21.328        "w_mbytes_per_sec": 0
00:10:21.328      },
00:10:21.328      "claimed": false,
00:10:21.328      "zoned": false,
00:10:21.328      "supported_io_types": {
00:10:21.328        "read": true,
00:10:21.328        "write": true,
00:10:21.328        "unmap": true,
00:10:21.328        "flush": true,
00:10:21.328        "reset": true,
00:10:21.328        "nvme_admin": false,
00:10:21.328        "nvme_io": false,
00:10:21.328        "nvme_io_md": false,
00:10:21.328        "write_zeroes": true,
00:10:21.328        "zcopy": false,
00:10:21.328        "get_zone_info": false,
00:10:21.328        "zone_management": false,
00:10:21.328        "zone_append": false,
00:10:21.328        "compare": true,
00:10:21.328        "compare_and_write": false,
00:10:21.328        "abort": true,
00:10:21.328        "seek_hole": false,
00:10:21.328        "seek_data": false,
00:10:21.328        "copy": true,
00:10:21.328        "nvme_iov_md": false
00:10:21.328      },
00:10:21.328      "driver_specific": {
00:10:21.328        "gpt": {
00:10:21.328          "base_bdev": "Nvme1n1",
00:10:21.328          "offset_blocks": 655360,
00:10:21.328          "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:10:21.328          "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:21.328          "partition_name": "SPDK_TEST_second"
00:10:21.328        }
00:10:21.328      }
00:10:21.328    }
00:10:21.328  ]'
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]]
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]'
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
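Both partitions are verified with the same three checks: exactly one bdev comes back for the queried UUID, the first alias equals that UUID, and driver_specific.gpt.unique_partition_guid matches it. A condensed sketch of that verification, using the first partition's UUID and the socket path from this run:

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b "$uuid")
    # One bdev returned, and both the alias and the GPT unique GUID match the query.
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]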
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64526
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64526 ']'
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64526
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:10:21.328   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:21.328    16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64526
00:10:21.587   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:21.587  killing process with pid 64526
00:10:21.587   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:21.587   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64526'
00:10:21.587   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64526
00:10:21.587   16:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64526
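killprocess above terminates the target and waits for the PID to disappear before the test is declared finished. A reduced sketch of that helper (an assumption-level condensation of autotest_common.sh, not its verbatim body):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        kill "$pid"                                 # SIGTERM first
        while kill -0 "$pid" 2>/dev/null; do        # poll until it exits
            sleep 0.1
        done
    }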
00:10:24.125  
00:10:24.125  real	0m4.197s
00:10:24.125  user	0m4.258s
00:10:24.125  sys	0m0.539s
00:10:24.125   16:20:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:24.125  ************************************
00:10:24.125  END TEST bdev_gpt_uuid
00:10:24.125  ************************************
00:10:24.125   16:20:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:24.125   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]]
00:10:24.125   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:10:24.125   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup
00:10:24.125   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:10:24.126   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:24.126   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:10:24.126   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:10:24.126   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:10:24.126   16:20:52 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:24.386  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:24.645  Waiting for block devices as requested
00:10:24.645  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:24.905  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:24.905  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:25.164  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:30.438  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:10:30.438   16:20:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:10:30.438   16:20:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:10:30.438  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:10:30.438  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:10:30.438  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:10:30.438  /dev/nvme0n1: calling ioctl to re-read partition table: Success
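The cleanup above erased the primary GPT header at offset 0x1000, the backup GPT header at the end of the device, and the protective-MBR signature (55 aa) at offset 0x1fe, then asked the kernel to re-read the partition table. The equivalent manual cleanup (device node is illustrative; only run this against disposable test devices):

    wipefs --all /dev/nvme0n1            # drop GPT (primary + backup) and PMBR signatures
    blockdev --rereadpt /dev/nvme0n1     # have the kernel re-read the now-empty table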
00:10:30.438   16:20:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:10:30.438  
00:10:30.438  real	1m4.493s
00:10:30.438  user	1m19.737s
00:10:30.438  sys	0m12.054s
00:10:30.438   16:20:59 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:30.438   16:20:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:30.438  ************************************
00:10:30.438  END TEST blockdev_nvme_gpt
00:10:30.438  ************************************
00:10:30.438   16:20:59  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:10:30.438   16:20:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:30.438   16:20:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.438   16:20:59  -- common/autotest_common.sh@10 -- # set +x
00:10:30.438  ************************************
00:10:30.438  START TEST nvme
00:10:30.438  ************************************
00:10:30.438   16:20:59 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:10:30.698  * Looking for test storage...
00:10:30.698  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:30.698     16:20:59 nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:10:30.698     16:20:59 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:30.698    16:20:59 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:30.698    16:20:59 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:30.698    16:20:59 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:30.698    16:20:59 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:10:30.698    16:20:59 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:10:30.698    16:20:59 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:10:30.698    16:20:59 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:10:30.698    16:20:59 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:10:30.698    16:20:59 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:10:30.698    16:20:59 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:10:30.698    16:20:59 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:30.698    16:20:59 nvme -- scripts/common.sh@344 -- # case "$op" in
00:10:30.698    16:20:59 nvme -- scripts/common.sh@345 -- # : 1
00:10:30.698    16:20:59 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:30.698    16:20:59 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:30.698     16:20:59 nvme -- scripts/common.sh@365 -- # decimal 1
00:10:30.698     16:20:59 nvme -- scripts/common.sh@353 -- # local d=1
00:10:30.698     16:20:59 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:30.698     16:20:59 nvme -- scripts/common.sh@355 -- # echo 1
00:10:30.698    16:20:59 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:10:30.698     16:20:59 nvme -- scripts/common.sh@366 -- # decimal 2
00:10:30.698     16:20:59 nvme -- scripts/common.sh@353 -- # local d=2
00:10:30.698     16:20:59 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:30.698     16:20:59 nvme -- scripts/common.sh@355 -- # echo 2
00:10:30.698    16:20:59 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:10:30.698    16:20:59 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:30.698    16:20:59 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:30.698    16:20:59 nvme -- scripts/common.sh@368 -- # return 0
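The lcov version gate traced above walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as 0. A condensed sketch of the '<' case (an approximation of the traced logic, not the script's full body):

    version_lt() {                       # returns 0 (true) when $1 < $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                         # equal is not less-than
    }
    version_lt 1.15 2                    # succeeds, as in the trace above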
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:30.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:30.698  		--rc genhtml_branch_coverage=1
00:10:30.698  		--rc genhtml_function_coverage=1
00:10:30.698  		--rc genhtml_legend=1
00:10:30.698  		--rc geninfo_all_blocks=1
00:10:30.698  		--rc geninfo_unexecuted_blocks=1
00:10:30.698  		
00:10:30.698  		'
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:30.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:30.698  		--rc genhtml_branch_coverage=1
00:10:30.698  		--rc genhtml_function_coverage=1
00:10:30.698  		--rc genhtml_legend=1
00:10:30.698  		--rc geninfo_all_blocks=1
00:10:30.698  		--rc geninfo_unexecuted_blocks=1
00:10:30.698  		
00:10:30.698  		'
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:30.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:30.698  		--rc genhtml_branch_coverage=1
00:10:30.698  		--rc genhtml_function_coverage=1
00:10:30.698  		--rc genhtml_legend=1
00:10:30.698  		--rc geninfo_all_blocks=1
00:10:30.698  		--rc geninfo_unexecuted_blocks=1
00:10:30.698  		
00:10:30.698  		'
00:10:30.698    16:20:59 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:30.698  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:30.698  		--rc genhtml_branch_coverage=1
00:10:30.698  		--rc genhtml_function_coverage=1
00:10:30.698  		--rc genhtml_legend=1
00:10:30.698  		--rc geninfo_all_blocks=1
00:10:30.698  		--rc geninfo_unexecuted_blocks=1
00:10:30.698  		
00:10:30.698  		'
00:10:30.698   16:20:59 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:31.635  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:32.203  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:32.204  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:32.204  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:32.204  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:32.463    16:21:01 nvme -- nvme/nvme.sh@79 -- # uname
00:10:32.463   16:21:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:10:32.463   16:21:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:10:32.463   16:21:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1075 -- # stubpid=65192
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:10:32.463  Waiting for stub to be ready for secondary processes...
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to be ready for secondary processes...
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65192 ]]
00:10:32.463   16:21:01 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:10:32.463  [2024-12-09 16:21:01.498768] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:10:32.463  [2024-12-09 16:21:01.499351] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:10:33.400   16:21:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:10:33.400   16:21:02 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65192 ]]
00:10:33.400   16:21:02 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:10:33.400  [2024-12-09 16:21:02.559326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:33.660  [2024-12-09 16:21:02.664362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:33.660  [2024-12-09 16:21:02.664526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:33.660  [2024-12-09 16:21:02.664599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:33.660  [2024-12-09 16:21:02.681333] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:10:33.660  [2024-12-09 16:21:02.681372] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:33.660  [2024-12-09 16:21:02.696946] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:10:33.660  [2024-12-09 16:21:02.697070] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:10:33.660  [2024-12-09 16:21:02.699802] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:33.660  [2024-12-09 16:21:02.700000] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
00:10:33.660  [2024-12-09 16:21:02.700061] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
00:10:33.660  [2024-12-09 16:21:02.703461] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:33.660  [2024-12-09 16:21:02.703644] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created
00:10:33.660  [2024-12-09 16:21:02.703711] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created
00:10:33.660  [2024-12-09 16:21:02.706611] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:33.660  [2024-12-09 16:21:02.706854] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created
00:10:33.660  [2024-12-09 16:21:02.706933] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created
00:10:33.660  [2024-12-09 16:21:02.706983] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created
00:10:33.660  [2024-12-09 16:21:02.707027] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:10:34.598   16:21:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:10:34.598  done.
00:10:34.598   16:21:03 nvme -- common/autotest_common.sh@1082 -- # echo done.
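start_stub launches the multi-process stub as the DPDK primary process and then polls: the loop keeps sleeping while /var/run/spdk_stub0 has not yet appeared and /proc/<stubpid> still exists. A sketch of that wait, assuming the stub arguments and socket name from this run:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.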
00:10:34.598   16:21:03 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:10:34.598   16:21:03 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:10:34.598   16:21:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.598   16:21:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:34.598  ************************************
00:10:34.598  START TEST nvme_reset
00:10:34.598  ************************************
00:10:34.598   16:21:03 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:10:34.598  Initializing NVMe Controllers
00:10:34.598  Skipping QEMU NVMe SSD at 0000:00:10.0
00:10:34.598  Skipping QEMU NVMe SSD at 0000:00:11.0
00:10:34.598  Skipping QEMU NVMe SSD at 0000:00:13.0
00:10:34.598  Skipping QEMU NVMe SSD at 0000:00:12.0
00:10:34.598  No NVMe controller found; /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:10:34.598  
00:10:34.598  real	0m0.305s
00:10:34.598  user	0m0.092s
00:10:34.598  sys	0m0.161s
00:10:34.598   16:21:03 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.598   16:21:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:10:34.598  ************************************
00:10:34.598  END TEST nvme_reset
00:10:34.598  ************************************
00:10:34.857   16:21:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:10:34.857   16:21:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:34.857   16:21:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.857   16:21:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:34.857  ************************************
00:10:34.857  START TEST nvme_identify
00:10:34.857  ************************************
00:10:34.857   16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:10:34.857   16:21:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:10:34.857   16:21:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:10:34.857   16:21:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:10:34.857    16:21:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:10:34.857    16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:34.857    16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:10:34.857    16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:34.857     16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:34.857     16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:34.857    16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:34.857    16:21:03 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
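get_nvme_bdfs above derives the controller list by rendering gen_nvme.sh's JSON config and pulling every traddr out with jq; here it yields the four QEMU controllers. The same one-liner, standalone:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"           # e.g. 0000:00:10.0 ... 0000:00:13.0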
00:10:34.857   16:21:03 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:10:35.119  [2024-12-09 16:21:04.202197] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 65225 terminated unexpected
00:10:35.119  =====================================================
00:10:35.119  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:35.119  =====================================================
00:10:35.119  Controller Capabilities/Features
00:10:35.119  ================================
00:10:35.119  Vendor ID:                             1b36
00:10:35.119  Subsystem Vendor ID:                   1af4
00:10:35.120  Serial Number:                         12340
00:10:35.120  Model Number:                          QEMU NVMe Ctrl
00:10:35.120  Firmware Version:                      8.0.0
00:10:35.120  Recommended Arb Burst:                 6
00:10:35.120  IEEE OUI Identifier:                   00 54 52
00:10:35.120  Multi-path I/O
00:10:35.120    May have multiple subsystem ports:   No
00:10:35.120    May have multiple controllers:       No
00:10:35.120    Associated with SR-IOV VF:           No
00:10:35.120  Max Data Transfer Size:                524288
00:10:35.120  Max Number of Namespaces:              256
00:10:35.120  Max Number of I/O Queues:              64
00:10:35.120  NVMe Specification Version (VS):       1.4
00:10:35.120  NVMe Specification Version (Identify): 1.4
00:10:35.120  Maximum Queue Entries:                 2048
00:10:35.120  Contiguous Queues Required:            Yes
00:10:35.120  Arbitration Mechanisms Supported
00:10:35.120    Weighted Round Robin:                Not Supported
00:10:35.120    Vendor Specific:                     Not Supported
00:10:35.120  Reset Timeout:                         7500 ms
00:10:35.120  Doorbell Stride:                       4 bytes
00:10:35.120  NVM Subsystem Reset:                   Not Supported
00:10:35.120  Command Sets Supported
00:10:35.120    NVM Command Set:                     Supported
00:10:35.120  Boot Partition:                        Not Supported
00:10:35.120  Memory Page Size Minimum:              4096 bytes
00:10:35.120  Memory Page Size Maximum:              65536 bytes
00:10:35.120  Persistent Memory Region:              Not Supported
00:10:35.120  Optional Asynchronous Events Supported
00:10:35.120    Namespace Attribute Notices:         Supported
00:10:35.120    Firmware Activation Notices:         Not Supported
00:10:35.120    ANA Change Notices:                  Not Supported
00:10:35.120    PLE Aggregate Log Change Notices:    Not Supported
00:10:35.120    LBA Status Info Alert Notices:       Not Supported
00:10:35.120    EGE Aggregate Log Change Notices:    Not Supported
00:10:35.120    Normal NVM Subsystem Shutdown event: Not Supported
00:10:35.120    Zone Descriptor Change Notices:      Not Supported
00:10:35.120    Discovery Log Change Notices:        Not Supported
00:10:35.120  Controller Attributes
00:10:35.120    128-bit Host Identifier:             Not Supported
00:10:35.120    Non-Operational Permissive Mode:     Not Supported
00:10:35.120    NVM Sets:                            Not Supported
00:10:35.120    Read Recovery Levels:                Not Supported
00:10:35.120    Endurance Groups:                    Not Supported
00:10:35.120    Predictable Latency Mode:            Not Supported
00:10:35.120    Traffic Based Keep Alive:            Not Supported
00:10:35.120    Namespace Granularity:               Not Supported
00:10:35.120    SQ Associations:                     Not Supported
00:10:35.120    UUID List:                           Not Supported
00:10:35.120    Multi-Domain Subsystem:              Not Supported
00:10:35.120    Fixed Capacity Management:           Not Supported
00:10:35.120    Variable Capacity Management:        Not Supported
00:10:35.120    Delete Endurance Group:              Not Supported
00:10:35.120    Delete NVM Set:                      Not Supported
00:10:35.120    Extended LBA Formats Supported:      Supported
00:10:35.120    Flexible Data Placement Supported:   Not Supported
00:10:35.120  
00:10:35.120  Controller Memory Buffer Support
00:10:35.120  ================================
00:10:35.120  Supported:                             No
00:10:35.120  
00:10:35.120  Persistent Memory Region Support
00:10:35.120  ================================
00:10:35.120  Supported:                             No
00:10:35.120  
00:10:35.120  Admin Command Set Attributes
00:10:35.120  ============================
00:10:35.120  Security Send/Receive:                 Not Supported
00:10:35.120  Format NVM:                            Supported
00:10:35.120  Firmware Activate/Download:            Not Supported
00:10:35.120  Namespace Management:                  Supported
00:10:35.120  Device Self-Test:                      Not Supported
00:10:35.120  Directives:                            Supported
00:10:35.120  NVMe-MI:                               Not Supported
00:10:35.120  Virtualization Management:             Not Supported
00:10:35.120  Doorbell Buffer Config:                Supported
00:10:35.120  Get LBA Status Capability:             Not Supported
00:10:35.120  Command & Feature Lockdown Capability: Not Supported
00:10:35.120  Abort Command Limit:                   4
00:10:35.120  Async Event Request Limit:             4
00:10:35.120  Number of Firmware Slots:              N/A
00:10:35.120  Firmware Slot 1 Read-Only:             N/A
00:10:35.120  Firmware Activation Without Reset:     N/A
00:10:35.120  Multiple Update Detection Support:     N/A
00:10:35.120  Firmware Update Granularity:           No Information Provided
00:10:35.120  Per-Namespace SMART Log:               Yes
00:10:35.120  Asymmetric Namespace Access Log Page:  Not Supported
00:10:35.120  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:10:35.120  Command Effects Log Page:              Supported
00:10:35.120  Get Log Page Extended Data:            Supported
00:10:35.120  Telemetry Log Pages:                   Not Supported
00:10:35.120  Persistent Event Log Pages:            Not Supported
00:10:35.120  Supported Log Pages Log Page:          May Support
00:10:35.120  Commands Supported & Effects Log Page: Not Supported
00:10:35.120  Feature Identifiers & Effects Log Page: May Support
00:10:35.120  NVMe-MI Commands & Effects Log Page:   May Support
00:10:35.120  Data Area 4 for Telemetry Log:         Not Supported
00:10:35.120  Error Log Page Entries Supported:      1
00:10:35.120  Keep Alive:                            Not Supported
00:10:35.120  
00:10:35.120  NVM Command Set Attributes
00:10:35.120  ==========================
00:10:35.120  Submission Queue Entry Size
00:10:35.120    Max:                       64
00:10:35.120    Min:                       64
00:10:35.120  Completion Queue Entry Size
00:10:35.120    Max:                       16
00:10:35.120    Min:                       16
00:10:35.120  Number of Namespaces:        256
00:10:35.120  Compare Command:             Supported
00:10:35.120  Write Uncorrectable Command: Not Supported
00:10:35.120  Dataset Management Command:  Supported
00:10:35.120  Write Zeroes Command:        Supported
00:10:35.120  Set Features Save Field:     Supported
00:10:35.120  Reservations:                Not Supported
00:10:35.120  Timestamp:                   Supported
00:10:35.120  Copy:                        Supported
00:10:35.120  Volatile Write Cache:        Present
00:10:35.120  Atomic Write Unit (Normal):  1
00:10:35.120  Atomic Write Unit (PFail):   1
00:10:35.120  Atomic Compare & Write Unit: 1
00:10:35.120  Fused Compare & Write:       Not Supported
00:10:35.120  Scatter-Gather List
00:10:35.120    SGL Command Set:           Supported
00:10:35.120    SGL Keyed:                 Not Supported
00:10:35.120    SGL Bit Bucket Descriptor: Not Supported
00:10:35.120    SGL Metadata Pointer:      Not Supported
00:10:35.120    Oversized SGL:             Not Supported
00:10:35.120    SGL Metadata Address:      Not Supported
00:10:35.120    SGL Offset:                Not Supported
00:10:35.120    Transport SGL Data Block:  Not Supported
00:10:35.120  Replay Protected Memory Block:  Not Supported
00:10:35.120  
00:10:35.120  Firmware Slot Information
00:10:35.120  =========================
00:10:35.120  Active slot:                 1
00:10:35.120  Slot 1 Firmware Revision:    1.0
00:10:35.120  
00:10:35.120  
00:10:35.120  Commands Supported and Effects
00:10:35.120  ==============================
00:10:35.120  Admin Commands
00:10:35.120  --------------
00:10:35.120     Delete I/O Submission Queue (00h): Supported 
00:10:35.120     Create I/O Submission Queue (01h): Supported 
00:10:35.120                    Get Log Page (02h): Supported 
00:10:35.120     Delete I/O Completion Queue (04h): Supported 
00:10:35.120     Create I/O Completion Queue (05h): Supported 
00:10:35.120                        Identify (06h): Supported 
00:10:35.120                           Abort (08h): Supported 
00:10:35.120                    Set Features (09h): Supported 
00:10:35.120                    Get Features (0Ah): Supported 
00:10:35.120      Asynchronous Event Request (0Ch): Supported 
00:10:35.120            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:35.120                  Directive Send (19h): Supported 
00:10:35.120               Directive Receive (1Ah): Supported 
00:10:35.120       Virtualization Management (1Ch): Supported 
00:10:35.120          Doorbell Buffer Config (7Ch): Supported 
00:10:35.120                      Format NVM (80h): Supported LBA-Change 
00:10:35.120  I/O Commands
00:10:35.120  ------------
00:10:35.120                           Flush (00h): Supported LBA-Change 
00:10:35.120                           Write (01h): Supported LBA-Change 
00:10:35.120                            Read (02h): Supported 
00:10:35.120                         Compare (05h): Supported 
00:10:35.120                    Write Zeroes (08h): Supported LBA-Change 
00:10:35.120              Dataset Management (09h): Supported LBA-Change 
00:10:35.120                         Unknown (0Ch): Supported 
00:10:35.120                         Unknown (12h): Supported 
00:10:35.120                            Copy (19h): Supported LBA-Change 
00:10:35.120                         Unknown (1Dh): Supported LBA-Change 
00:10:35.120  
00:10:35.120  Error Log
00:10:35.120  =========
00:10:35.120  
00:10:35.120  Arbitration
00:10:35.120  ===========
00:10:35.120  Arbitration Burst:           no limit
00:10:35.120  
00:10:35.120  Power Management
00:10:35.120  ================
00:10:35.120  Number of Power States:          1
00:10:35.120  Current Power State:             Power State #0
00:10:35.120  Power State #0:
00:10:35.120    Max Power:                     25.00 W
00:10:35.120    Non-Operational State:         Operational
00:10:35.120    Entry Latency:                 16 microseconds
00:10:35.120    Exit Latency:                  4 microseconds
00:10:35.120    Relative Read Throughput:      0
00:10:35.120    Relative Read Latency:         0
00:10:35.120    Relative Write Throughput:     0
00:10:35.120    Relative Write Latency:        0
00:10:35.120  [2024-12-09 16:21:04.203562] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 65225 terminated unexpected
00:10:35.120    Idle Power:                     Not Reported
00:10:35.120    Active Power:                   Not Reported
00:10:35.120  Non-Operational Permissive Mode: Not Supported
00:10:35.120  
00:10:35.120  Health Information
00:10:35.120  ==================
00:10:35.120  Critical Warnings:
00:10:35.120    Available Spare Space:     OK
00:10:35.120    Temperature:               OK
00:10:35.120    Device Reliability:        OK
00:10:35.120    Read Only:                 No
00:10:35.120    Volatile Memory Backup:    OK
00:10:35.121  Current Temperature:         323 Kelvin (50 Celsius)
00:10:35.121  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:35.121  Available Spare:             0%
00:10:35.121  Available Spare Threshold:   0%
00:10:35.121  Life Percentage Used:        0%
00:10:35.121  Data Units Read:             810
00:10:35.121  Data Units Written:          738
00:10:35.121  Host Read Commands:          39410
00:10:35.121  Host Write Commands:         39196
00:10:35.121  Controller Busy Time:        0 minutes
00:10:35.121  Power Cycles:                0
00:10:35.121  Power On Hours:              0 hours
00:10:35.121  Unsafe Shutdowns:            0
00:10:35.121  Unrecoverable Media Errors:  0
00:10:35.121  Lifetime Error Log Entries:  0
00:10:35.121  Warning Temperature Time:    0 minutes
00:10:35.121  Critical Temperature Time:   0 minutes
00:10:35.121  
00:10:35.121  Number of Queues
00:10:35.121  ================
00:10:35.121  Number of I/O Submission Queues:      64
00:10:35.121  Number of I/O Completion Queues:      64
00:10:35.121  
00:10:35.121  ZNS Specific Controller Data
00:10:35.121  ============================
00:10:35.121  Zone Append Size Limit:      0
00:10:35.121  
00:10:35.121  
00:10:35.121  Active Namespaces
00:10:35.121  =================
00:10:35.121  Namespace ID:1
00:10:35.121  Error Recovery Timeout:                Unlimited
00:10:35.121  Command Set Identifier:                NVM (00h)
00:10:35.121  Deallocate:                            Supported
00:10:35.121  Deallocated/Unwritten Error:           Supported
00:10:35.121  Deallocated Read Value:                All 0x00
00:10:35.121  Deallocate in Write Zeroes:            Not Supported
00:10:35.121  Deallocated Guard Field:               0xFFFF
00:10:35.121  Flush:                                 Supported
00:10:35.121  Reservation:                           Not Supported
00:10:35.121  Metadata Transferred as:               Separate Metadata Buffer
00:10:35.121  Namespace Sharing Capabilities:        Private
00:10:35.121  Size (in LBAs):                        1548666 (5GiB)
00:10:35.121  Capacity (in LBAs):                    1548666 (5GiB)
00:10:35.121  Utilization (in LBAs):                 1548666 (5GiB)
00:10:35.121  Thin Provisioning:                     Not Supported
00:10:35.121  Per-NS Atomic Units:                   No
00:10:35.121  Maximum Single Source Range Length:    128
00:10:35.121  Maximum Copy Length:                   128
00:10:35.121  Maximum Source Range Count:            128
00:10:35.121  NGUID/EUI64 Never Reused:              No
00:10:35.121  Namespace Write Protected:             No
00:10:35.121  Number of LBA Formats:                 8
00:10:35.121  Current LBA Format:                    LBA Format #07
00:10:35.121  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.121  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.121  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.121  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.121  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.121  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.121  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.121  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.121  
00:10:35.121  NVM Specific Namespace Data
00:10:35.121  ===========================
00:10:35.121  Logical Block Storage Tag Mask:               0
00:10:35.121  Protection Information Capabilities:
00:10:35.121    16b Guard Protection Information Storage Tag Support:  No
00:10:35.121    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.121    Storage Tag Check Read Support:                        No
00:10:35.121  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.121  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
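spdk_nvme_identify -i 0 walks every attached controller, so the dump for 0000:00:10.0 above is followed by the remaining three. To inspect a single controller instead, the identify example app also accepts a transport ID (a hedged usage note; the -r flag and its format are taken from the identify tool's conventions):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'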
00:10:35.121  =====================================================
00:10:35.121  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:35.121  =====================================================
00:10:35.121  Controller Capabilities/Features
00:10:35.121  ================================
00:10:35.121  Vendor ID:                             1b36
00:10:35.121  Subsystem Vendor ID:                   1af4
00:10:35.121  Serial Number:                         12341
00:10:35.121  Model Number:                          QEMU NVMe Ctrl
00:10:35.121  Firmware Version:                      8.0.0
00:10:35.121  Recommended Arb Burst:                 6
00:10:35.121  IEEE OUI Identifier:                   00 54 52
00:10:35.121  Multi-path I/O
00:10:35.121    May have multiple subsystem ports:   No
00:10:35.121    May have multiple controllers:       No
00:10:35.121    Associated with SR-IOV VF:           No
00:10:35.121  Max Data Transfer Size:                524288
00:10:35.121  Max Number of Namespaces:              256
00:10:35.121  Max Number of I/O Queues:              64
00:10:35.121  NVMe Specification Version (VS):       1.4
00:10:35.121  NVMe Specification Version (Identify): 1.4
00:10:35.121  Maximum Queue Entries:                 2048
00:10:35.121  Contiguous Queues Required:            Yes
00:10:35.121  Arbitration Mechanisms Supported
00:10:35.121    Weighted Round Robin:                Not Supported
00:10:35.121    Vendor Specific:                     Not Supported
00:10:35.121  Reset Timeout:                         7500 ms
00:10:35.121  Doorbell Stride:                       4 bytes
00:10:35.121  NVM Subsystem Reset:                   Not Supported
00:10:35.121  Command Sets Supported
00:10:35.121    NVM Command Set:                     Supported
00:10:35.121  Boot Partition:                        Not Supported
00:10:35.121  Memory Page Size Minimum:              4096 bytes
00:10:35.121  Memory Page Size Maximum:              65536 bytes
00:10:35.121  Persistent Memory Region:              Not Supported
00:10:35.121  Optional Asynchronous Events Supported
00:10:35.121    Namespace Attribute Notices:         Supported
00:10:35.121    Firmware Activation Notices:         Not Supported
00:10:35.121    ANA Change Notices:                  Not Supported
00:10:35.121    PLE Aggregate Log Change Notices:    Not Supported
00:10:35.121    LBA Status Info Alert Notices:       Not Supported
00:10:35.121    EGE Aggregate Log Change Notices:    Not Supported
00:10:35.121    Normal NVM Subsystem Shutdown event: Not Supported
00:10:35.121    Zone Descriptor Change Notices:      Not Supported
00:10:35.121    Discovery Log Change Notices:        Not Supported
00:10:35.121  Controller Attributes
00:10:35.121    128-bit Host Identifier:             Not Supported
00:10:35.121    Non-Operational Permissive Mode:     Not Supported
00:10:35.121    NVM Sets:                            Not Supported
00:10:35.121    Read Recovery Levels:                Not Supported
00:10:35.121    Endurance Groups:                    Not Supported
00:10:35.121    Predictable Latency Mode:            Not Supported
00:10:35.121    Traffic Based Keep Alive:            Not Supported
00:10:35.121    Namespace Granularity:               Not Supported
00:10:35.121    SQ Associations:                     Not Supported
00:10:35.121    UUID List:                           Not Supported
00:10:35.121    Multi-Domain Subsystem:              Not Supported
00:10:35.121    Fixed Capacity Management:           Not Supported
00:10:35.121    Variable Capacity Management:        Not Supported
00:10:35.121    Delete Endurance Group:              Not Supported
00:10:35.121    Delete NVM Set:                      Not Supported
00:10:35.121    Extended LBA Formats Supported:      Supported
00:10:35.121    Flexible Data Placement Supported:   Not Supported
00:10:35.121  
00:10:35.121  Controller Memory Buffer Support
00:10:35.121  ================================
00:10:35.121  Supported:                             No
00:10:35.121  
00:10:35.121  Persistent Memory Region Support
00:10:35.121  ================================
00:10:35.121  Supported:                             No
00:10:35.121  
00:10:35.121  Admin Command Set Attributes
00:10:35.121  ============================
00:10:35.121  Security Send/Receive:                 Not Supported
00:10:35.121  Format NVM:                            Supported
00:10:35.121  Firmware Activate/Download:            Not Supported
00:10:35.121  Namespace Management:                  Supported
00:10:35.121  Device Self-Test:                      Not Supported
00:10:35.121  Directives:                            Supported
00:10:35.121  NVMe-MI:                               Not Supported
00:10:35.121  Virtualization Management:             Not Supported
00:10:35.121  Doorbell Buffer Config:                Supported
00:10:35.121  Get LBA Status Capability:             Not Supported
00:10:35.121  Command & Feature Lockdown Capability: Not Supported
00:10:35.121  Abort Command Limit:                   4
00:10:35.121  Async Event Request Limit:             4
00:10:35.121  Number of Firmware Slots:              N/A
00:10:35.121  Firmware Slot 1 Read-Only:             N/A
00:10:35.121  Firmware Activation Without Reset:     N/A
00:10:35.121  Multiple Update Detection Support:     N/A
00:10:35.121  Firmware Update Granularity:           No Information Provided
00:10:35.121  Per-Namespace SMART Log:               Yes
00:10:35.121  Asymmetric Namespace Access Log Page:  Not Supported
00:10:35.121  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:10:35.121  Command Effects Log Page:              Supported
00:10:35.121  Get Log Page Extended Data:            Supported
00:10:35.121  Telemetry Log Pages:                   Not Supported
00:10:35.121  Persistent Event Log Pages:            Not Supported
00:10:35.121  Supported Log Pages Log Page:          May Support
00:10:35.121  Commands Supported & Effects Log Page: Not Supported
00:10:35.121  Feature Identifiers & Effects Log Page: May Support
00:10:35.121  NVMe-MI Commands & Effects Log Page:   May Support
00:10:35.121  Data Area 4 for Telemetry Log:         Not Supported
00:10:35.121  Error Log Page Entries Supported:      1
00:10:35.121  Keep Alive:                            Not Supported
00:10:35.121  
00:10:35.121  NVM Command Set Attributes
00:10:35.121  ==========================
00:10:35.121  Submission Queue Entry Size
00:10:35.121    Max:                       64
00:10:35.121    Min:                       64
00:10:35.121  Completion Queue Entry Size
00:10:35.121    Max:                       16
00:10:35.121    Min:                       16
00:10:35.121  Number of Namespaces:        256
00:10:35.121  Compare Command:             Supported
00:10:35.121  Write Uncorrectable Command: Not Supported
00:10:35.121  Dataset Management Command:  Supported
00:10:35.121  Write Zeroes Command:        Supported
00:10:35.121  Set Features Save Field:     Supported
00:10:35.122  Reservations:                Not Supported
00:10:35.122  Timestamp:                   Supported
00:10:35.122  Copy:                        Supported
00:10:35.122  Volatile Write Cache:        Present
00:10:35.122  Atomic Write Unit (Normal):  1
00:10:35.122  Atomic Write Unit (PFail):   1
00:10:35.122  Atomic Compare & Write Unit: 1
00:10:35.122  Fused Compare & Write:       Not Supported
00:10:35.122  Scatter-Gather List
00:10:35.122    SGL Command Set:           Supported
00:10:35.122    SGL Keyed:                 Not Supported
00:10:35.122    SGL Bit Bucket Descriptor: Not Supported
00:10:35.122    SGL Metadata Pointer:      Not Supported
00:10:35.122    Oversized SGL:             Not Supported
00:10:35.122    SGL Metadata Address:      Not Supported
00:10:35.122    SGL Offset:                Not Supported
00:10:35.122    Transport SGL Data Block:  Not Supported
00:10:35.122  Replay Protected Memory Block:  Not Supported
00:10:35.122  
00:10:35.122  Firmware Slot Information
00:10:35.122  =========================
00:10:35.122  Active slot:                 1
00:10:35.122  Slot 1 Firmware Revision:    1.0
00:10:35.122  
00:10:35.122  
00:10:35.122  Commands Supported and Effects
00:10:35.122  ==============================
00:10:35.122  Admin Commands
00:10:35.122  --------------
00:10:35.122     Delete I/O Submission Queue (00h): Supported 
00:10:35.122     Create I/O Submission Queue (01h): Supported 
00:10:35.122                    Get Log Page (02h): Supported 
00:10:35.122     Delete I/O Completion Queue (04h): Supported 
00:10:35.122     Create I/O Completion Queue (05h): Supported 
00:10:35.122                        Identify (06h): Supported 
00:10:35.122                           Abort (08h): Supported 
00:10:35.122                    Set Features (09h): Supported 
00:10:35.122                    Get Features (0Ah): Supported 
00:10:35.122      Asynchronous Event Request (0Ch): Supported 
00:10:35.122            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:35.122                  Directive Send (19h): Supported 
00:10:35.122               Directive Receive (1Ah): Supported 
00:10:35.122       Virtualization Management (1Ch): Supported 
00:10:35.122          Doorbell Buffer Config (7Ch): Supported 
00:10:35.122                      Format NVM (80h): Supported LBA-Change 
00:10:35.122  I/O Commands
00:10:35.122  ------------
00:10:35.122                           Flush (00h): Supported LBA-Change 
00:10:35.122                           Write (01h): Supported LBA-Change 
00:10:35.122                            Read (02h): Supported 
00:10:35.122                         Compare (05h): Supported 
00:10:35.122                    Write Zeroes (08h): Supported LBA-Change 
00:10:35.122              Dataset Management (09h): Supported LBA-Change 
00:10:35.122                         Unknown (0Ch): Supported 
00:10:35.122                         Unknown (12h): Supported 
00:10:35.122                            Copy (19h): Supported LBA-Change 
00:10:35.122                         Unknown (1Dh): Supported LBA-Change 
00:10:35.122  
00:10:35.122  Error Log
00:10:35.122  =========
00:10:35.122  
00:10:35.122  Arbitration
00:10:35.122  ===========
00:10:35.122  Arbitration Burst:           no limit
00:10:35.122  
00:10:35.122  Power Management
00:10:35.122  ================
00:10:35.122  Number of Power States:          1
00:10:35.122  Current Power State:             Power State #0
00:10:35.122  Power State #0:
00:10:35.122    Max Power:                     25.00 W
00:10:35.122    Non-Operational State:         Operational
00:10:35.122    Entry Latency:                 16 microseconds
00:10:35.122    Exit Latency:                  4 microseconds
00:10:35.122    Relative Read Throughput:      0
00:10:35.122    Relative Read Latency:         0
00:10:35.122    Relative Write Throughput:     0
00:10:35.122    Relative Write Latency:        0
00:10:35.122    Idle Power:                     Not Reported
00:10:35.122    Active Power:                   Not Reported
00:10:35.122  Non-Operational Permissive Mode: Not Supported
00:10:35.122  
00:10:35.122  Health Information
00:10:35.122  ==================
00:10:35.122  Critical Warnings:
00:10:35.122    Available Spare Space:     OK
00:10:35.122  [2024-12-09 16:21:04.204375] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 65225 terminated unexpected
00:10:35.122    Temperature:               OK
00:10:35.122    Device Reliability:        OK
00:10:35.122    Read Only:                 No
00:10:35.122    Volatile Memory Backup:    OK
00:10:35.122  Current Temperature:         323 Kelvin (50 Celsius)
00:10:35.122  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:35.122  Available Spare:             0%
00:10:35.122  Available Spare Threshold:   0%
00:10:35.122  Life Percentage Used:        0%
00:10:35.122  Data Units Read:             1251
00:10:35.122  Data Units Written:          1118
00:10:35.122  Host Read Commands:          58490
00:10:35.122  Host Write Commands:         57283
00:10:35.122  Controller Busy Time:        0 minutes
00:10:35.122  Power Cycles:                0
00:10:35.122  Power On Hours:              0 hours
00:10:35.122  Unsafe Shutdowns:            0
00:10:35.122  Unrecoverable Media Errors:  0
00:10:35.122  Lifetime Error Log Entries:  0
00:10:35.122  Warning Temperature Time:    0 minutes
00:10:35.122  Critical Temperature Time:   0 minutes
00:10:35.122  
00:10:35.122  Number of Queues
00:10:35.122  ================
00:10:35.122  Number of I/O Submission Queues:      64
00:10:35.122  Number of I/O Completion Queues:      64
00:10:35.122  
00:10:35.122  ZNS Specific Controller Data
00:10:35.122  ============================
00:10:35.122  Zone Append Size Limit:      0
00:10:35.122  
00:10:35.122  
00:10:35.122  Active Namespaces
00:10:35.122  =================
00:10:35.122  Namespace ID:1
00:10:35.122  Error Recovery Timeout:                Unlimited
00:10:35.122  Command Set Identifier:                NVM (00h)
00:10:35.122  Deallocate:                            Supported
00:10:35.122  Deallocated/Unwritten Error:           Supported
00:10:35.122  Deallocated Read Value:                All 0x00
00:10:35.122  Deallocate in Write Zeroes:            Not Supported
00:10:35.122  Deallocated Guard Field:               0xFFFF
00:10:35.122  Flush:                                 Supported
00:10:35.122  Reservation:                           Not Supported
00:10:35.122  Namespace Sharing Capabilities:        Private
00:10:35.122  Size (in LBAs):                        1310720 (5GiB)
00:10:35.122  Capacity (in LBAs):                    1310720 (5GiB)
00:10:35.122  Utilization (in LBAs):                 1310720 (5GiB)
00:10:35.122  Thin Provisioning:                     Not Supported
00:10:35.122  Per-NS Atomic Units:                   No
00:10:35.122  Maximum Single Source Range Length:    128
00:10:35.122  Maximum Copy Length:                   128
00:10:35.122  Maximum Source Range Count:            128
00:10:35.122  NGUID/EUI64 Never Reused:              No
00:10:35.122  Namespace Write Protected:             No
00:10:35.122  Number of LBA Formats:                 8
00:10:35.122  Current LBA Format:                    LBA Format #04
00:10:35.122  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.122  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.122  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.122  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.122  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.122  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.122  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.122  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.122  
00:10:35.122  NVM Specific Namespace Data
00:10:35.122  ===========================
00:10:35.122  Logical Block Storage Tag Mask:               0
00:10:35.122  Protection Information Capabilities:
00:10:35.122    16b Guard Protection Information Storage Tag Support:  No
00:10:35.122    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.122    Storage Tag Check Read Support:                        No
00:10:35.122  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.122  =====================================================
00:10:35.122  NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:35.122  =====================================================
00:10:35.122  Controller Capabilities/Features
00:10:35.122  ================================
00:10:35.122  Vendor ID:                             1b36
00:10:35.122  Subsystem Vendor ID:                   1af4
00:10:35.122  Serial Number:                         12343
00:10:35.122  Model Number:                          QEMU NVMe Ctrl
00:10:35.122  Firmware Version:                      8.0.0
00:10:35.122  Recommended Arb Burst:                 6
00:10:35.122  IEEE OUI Identifier:                   00 54 52
00:10:35.122  Multi-path I/O
00:10:35.122    May have multiple subsystem ports:   No
00:10:35.122    May have multiple controllers:       Yes
00:10:35.122    Associated with SR-IOV VF:           No
00:10:35.122  Max Data Transfer Size:                524288
00:10:35.122  Max Number of Namespaces:              256
00:10:35.122  Max Number of I/O Queues:              64
00:10:35.122  NVMe Specification Version (VS):       1.4
00:10:35.122  NVMe Specification Version (Identify): 1.4
00:10:35.122  Maximum Queue Entries:                 2048
00:10:35.122  Contiguous Queues Required:            Yes
00:10:35.122  Arbitration Mechanisms Supported
00:10:35.122    Weighted Round Robin:                Not Supported
00:10:35.122    Vendor Specific:                     Not Supported
00:10:35.122  Reset Timeout:                         7500 ms
00:10:35.122  Doorbell Stride:                       4 bytes
00:10:35.122  NVM Subsystem Reset:                   Not Supported
00:10:35.122  Command Sets Supported
00:10:35.122    NVM Command Set:                     Supported
00:10:35.122  Boot Partition:                        Not Supported
00:10:35.122  Memory Page Size Minimum:              4096 bytes
00:10:35.122  Memory Page Size Maximum:              65536 bytes
00:10:35.122  Persistent Memory Region:              Not Supported
00:10:35.122  Optional Asynchronous Events Supported
00:10:35.122    Namespace Attribute Notices:         Supported
00:10:35.123    Firmware Activation Notices:         Not Supported
00:10:35.123    ANA Change Notices:                  Not Supported
00:10:35.123    PLE Aggregate Log Change Notices:    Not Supported
00:10:35.123    LBA Status Info Alert Notices:       Not Supported
00:10:35.123    EGE Aggregate Log Change Notices:    Not Supported
00:10:35.123    Normal NVM Subsystem Shutdown event: Not Supported
00:10:35.123    Zone Descriptor Change Notices:      Not Supported
00:10:35.123    Discovery Log Change Notices:        Not Supported
00:10:35.123  Controller Attributes
00:10:35.123    128-bit Host Identifier:             Not Supported
00:10:35.123    Non-Operational Permissive Mode:     Not Supported
00:10:35.123    NVM Sets:                            Not Supported
00:10:35.123    Read Recovery Levels:                Not Supported
00:10:35.123    Endurance Groups:                    Supported
00:10:35.123    Predictable Latency Mode:            Not Supported
00:10:35.123    Traffic Based Keep Alive:            Not Supported
00:10:35.123    Namespace Granularity:               Not Supported
00:10:35.123    SQ Associations:                     Not Supported
00:10:35.123    UUID List:                           Not Supported
00:10:35.123    Multi-Domain Subsystem:              Not Supported
00:10:35.123    Fixed Capacity Management:           Not Supported
00:10:35.123    Variable Capacity Management:        Not Supported
00:10:35.123    Delete Endurance Group:              Not Supported
00:10:35.123    Delete NVM Set:                      Not Supported
00:10:35.123    Extended LBA Formats Supported:      Supported
00:10:35.123    Flexible Data Placement Supported:   Supported
00:10:35.123  
00:10:35.123  Controller Memory Buffer Support
00:10:35.123  ================================
00:10:35.123  Supported:                             No
00:10:35.123  
00:10:35.123  Persistent Memory Region Support
00:10:35.123  ================================
00:10:35.123  Supported:                             No
00:10:35.123  
00:10:35.123  Admin Command Set Attributes
00:10:35.123  ============================
00:10:35.123  Security Send/Receive:                 Not Supported
00:10:35.123  Format NVM:                            Supported
00:10:35.123  Firmware Activate/Download:            Not Supported
00:10:35.123  Namespace Management:                  Supported
00:10:35.123  Device Self-Test:                      Not Supported
00:10:35.123  Directives:                            Supported
00:10:35.123  NVMe-MI:                               Not Supported
00:10:35.123  Virtualization Management:             Not Supported
00:10:35.123  Doorbell Buffer Config:                Supported
00:10:35.123  Get LBA Status Capability:             Not Supported
00:10:35.123  Command & Feature Lockdown Capability: Not Supported
00:10:35.123  Abort Command Limit:                   4
00:10:35.123  Async Event Request Limit:             4
00:10:35.123  Number of Firmware Slots:              N/A
00:10:35.123  Firmware Slot 1 Read-Only:             N/A
00:10:35.123  Firmware Activation Without Reset:     N/A
00:10:35.123  Multiple Update Detection Support:     N/A
00:10:35.123  Firmware Update Granularity:           No Information Provided
00:10:35.123  Per-Namespace SMART Log:               Yes
00:10:35.123  Asymmetric Namespace Access Log Page:  Not Supported
00:10:35.123  Subsystem NQN:                         nqn.2019-08.org.qemu:fdp-subsys3
00:10:35.123  Command Effects Log Page:              Supported
00:10:35.123  Get Log Page Extended Data:            Supported
00:10:35.123  Telemetry Log Pages:                   Not Supported
00:10:35.123  Persistent Event Log Pages:            Not Supported
00:10:35.123  Supported Log Pages Log Page:          May Support
00:10:35.123  Commands Supported & Effects Log Page: Not Supported
00:10:35.123  Feature Identifiers & Effects Log Page: May Support
00:10:35.123  NVMe-MI Commands & Effects Log Page:   May Support
00:10:35.123  Data Area 4 for Telemetry Log:         Not Supported
00:10:35.123  Error Log Page Entries Supported:      1
00:10:35.123  Keep Alive:                            Not Supported
00:10:35.123  
00:10:35.123  NVM Command Set Attributes
00:10:35.123  ==========================
00:10:35.123  Submission Queue Entry Size
00:10:35.123    Max:                       64
00:10:35.123    Min:                       64
00:10:35.123  Completion Queue Entry Size
00:10:35.123    Max:                       16
00:10:35.123    Min:                       16
00:10:35.123  Number of Namespaces:        256
00:10:35.123  Compare Command:             Supported
00:10:35.123  Write Uncorrectable Command: Not Supported
00:10:35.123  Dataset Management Command:  Supported
00:10:35.123  Write Zeroes Command:        Supported
00:10:35.123  Set Features Save Field:     Supported
00:10:35.123  Reservations:                Not Supported
00:10:35.123  Timestamp:                   Supported
00:10:35.123  Copy:                        Supported
00:10:35.123  Volatile Write Cache:        Present
00:10:35.123  Atomic Write Unit (Normal):  1
00:10:35.123  Atomic Write Unit (PFail):   1
00:10:35.123  Atomic Compare & Write Unit: 1
00:10:35.123  Fused Compare & Write:       Not Supported
00:10:35.123  Scatter-Gather List
00:10:35.123    SGL Command Set:           Supported
00:10:35.123    SGL Keyed:                 Not Supported
00:10:35.123    SGL Bit Bucket Descriptor: Not Supported
00:10:35.123    SGL Metadata Pointer:      Not Supported
00:10:35.123    Oversized SGL:             Not Supported
00:10:35.123    SGL Metadata Address:      Not Supported
00:10:35.123    SGL Offset:                Not Supported
00:10:35.123    Transport SGL Data Block:  Not Supported
00:10:35.123  Replay Protected Memory Block:  Not Supported
00:10:35.123  
00:10:35.123  Firmware Slot Information
00:10:35.123  =========================
00:10:35.123  Active slot:                 1
00:10:35.123  Slot 1 Firmware Revision:    1.0
00:10:35.123  
00:10:35.123  
00:10:35.123  Commands Supported and Effects
00:10:35.123  ==============================
00:10:35.123  Admin Commands
00:10:35.123  --------------
00:10:35.123     Delete I/O Submission Queue (00h): Supported 
00:10:35.123     Create I/O Submission Queue (01h): Supported 
00:10:35.123                    Get Log Page (02h): Supported 
00:10:35.123     Delete I/O Completion Queue (04h): Supported 
00:10:35.123     Create I/O Completion Queue (05h): Supported 
00:10:35.123                        Identify (06h): Supported 
00:10:35.123                           Abort (08h): Supported 
00:10:35.123                    Set Features (09h): Supported 
00:10:35.123                    Get Features (0Ah): Supported 
00:10:35.123      Asynchronous Event Request (0Ch): Supported 
00:10:35.123            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:35.123                  Directive Send (19h): Supported 
00:10:35.123               Directive Receive (1Ah): Supported 
00:10:35.123       Virtualization Management (1Ch): Supported 
00:10:35.123          Doorbell Buffer Config (7Ch): Supported 
00:10:35.123                      Format NVM (80h): Supported LBA-Change 
00:10:35.123  I/O Commands
00:10:35.123  ------------
00:10:35.123                           Flush (00h): Supported LBA-Change 
00:10:35.123                           Write (01h): Supported LBA-Change 
00:10:35.123                            Read (02h): Supported 
00:10:35.123                         Compare (05h): Supported 
00:10:35.123                    Write Zeroes (08h): Supported LBA-Change 
00:10:35.123              Dataset Management (09h): Supported LBA-Change 
00:10:35.123                         Unknown (0Ch): Supported 
00:10:35.123                         Unknown (12h): Supported 
00:10:35.123                            Copy (19h): Supported LBA-Change 
00:10:35.123                         Unknown (1Dh): Supported LBA-Change 
00:10:35.123  
00:10:35.123  Error Log
00:10:35.123  =========
00:10:35.123  
00:10:35.123  Arbitration
00:10:35.123  ===========
00:10:35.123  Arbitration Burst:           no limit
00:10:35.123  
00:10:35.123  Power Management
00:10:35.123  ================
00:10:35.123  Number of Power States:          1
00:10:35.123  Current Power State:             Power State #0
00:10:35.123  Power State #0:
00:10:35.123    Max Power:                     25.00 W
00:10:35.123    Non-Operational State:         Operational
00:10:35.123    Entry Latency:                 16 microseconds
00:10:35.123    Exit Latency:                  4 microseconds
00:10:35.123    Relative Read Throughput:      0
00:10:35.123    Relative Read Latency:         0
00:10:35.123    Relative Write Throughput:     0
00:10:35.123    Relative Write Latency:        0
00:10:35.123    Idle Power:                     Not Reported
00:10:35.123    Active Power:                   Not Reported
00:10:35.123  Non-Operational Permissive Mode: Not Supported
00:10:35.123  
00:10:35.123  Health Information
00:10:35.123  ==================
00:10:35.123  Critical Warnings:
00:10:35.123    Available Spare Space:     OK
00:10:35.123    Temperature:               OK
00:10:35.123    Device Reliability:        OK
00:10:35.123    Read Only:                 No
00:10:35.123    Volatile Memory Backup:    OK
00:10:35.123  Current Temperature:         323 Kelvin (50 Celsius)
00:10:35.123  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:35.123  Available Spare:             0%
00:10:35.123  Available Spare Threshold:   0%
00:10:35.123  Life Percentage Used:        0%
00:10:35.123  Data Units Read:             918
00:10:35.123  Data Units Written:          847
00:10:35.123  Host Read Commands:          40790
00:10:35.123  Host Write Commands:         40213
00:10:35.123  Controller Busy Time:        0 minutes
00:10:35.123  Power Cycles:                0
00:10:35.123  Power On Hours:              0 hours
00:10:35.123  Unsafe Shutdowns:            0
00:10:35.123  Unrecoverable Media Errors:  0
00:10:35.123  Lifetime Error Log Entries:  0
00:10:35.123  Warning Temperature Time:    0 minutes
00:10:35.123  Critical Temperature Time:   0 minutes
00:10:35.123  
00:10:35.123  Number of Queues
00:10:35.123  ================
00:10:35.123  Number of I/O Submission Queues:      64
00:10:35.123  Number of I/O Completion Queues:      64
00:10:35.123  
00:10:35.123  ZNS Specific Controller Data
00:10:35.123  ============================
00:10:35.123  Zone Append Size Limit:      0
00:10:35.123  
00:10:35.123  
00:10:35.123  Active Namespaces
00:10:35.123  =================
00:10:35.123  Namespace ID:1
00:10:35.123  Error Recovery Timeout:                Unlimited
00:10:35.123  Command Set Identifier:                NVM (00h)
00:10:35.123  Deallocate:                            Supported
00:10:35.123  Deallocated/Unwritten Error:           Supported
00:10:35.123  Deallocated Read Value:                All 0x00
00:10:35.123  Deallocate in Write Zeroes:            Not Supported
00:10:35.123  Deallocated Guard Field:               0xFFFF
00:10:35.124  Flush:                                 Supported
00:10:35.124  Reservation:                           Not Supported
00:10:35.124  Namespace Sharing Capabilities:        Multiple Controllers
00:10:35.124  Size (in LBAs):                        262144 (1GiB)
00:10:35.124  Capacity (in LBAs):                    262144 (1GiB)
00:10:35.124  Utilization (in LBAs):                 262144 (1GiB)
00:10:35.124  Thin Provisioning:                     Not Supported
00:10:35.124  Per-NS Atomic Units:                   No
00:10:35.124  Maximum Single Source Range Length:    128
00:10:35.124  Maximum Copy Length:                   128
00:10:35.124  Maximum Source Range Count:            128
00:10:35.124  NGUID/EUI64 Never Reused:              No
00:10:35.124  Namespace Write Protected:             No
00:10:35.124  Endurance group ID:                    1
00:10:35.124  Number of LBA Formats:                 8
00:10:35.124  Current LBA Format:                    LBA Format #04
00:10:35.124  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.124  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.124  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.124  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.124  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.124  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.124  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.124  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.124  
00:10:35.124  Get Feature FDP:
00:10:35.124  ================
00:10:35.124    Enabled:                 Yes
00:10:35.124    FDP configuration index: 0
00:10:35.124  
00:10:35.124  FDP configurations log page
00:10:35.124  ===========================
00:10:35.124  Number of FDP configurations:         1
00:10:35.124  Version:                              0
00:10:35.124  Size:                                 112
00:10:35.124  FDP Configuration Descriptor:         0
00:10:35.124    Descriptor Size:                    96
00:10:35.124    Reclaim Group Identifier format:    2
00:10:35.124    FDP Volatile Write Cache:           Not Present
00:10:35.124    FDP Configuration:                  Valid
00:10:35.124    Vendor Specific Size:               0
00:10:35.124    Number of Reclaim Groups:           2
00:10:35.124    Number of Reclaim Unit Handles:     8
00:10:35.124    Max Placement Identifiers:          128
00:10:35.124    Number of Namespaces Supported:     256
00:10:35.124    Reclaim Unit Nominal Size:          6000000 bytes
00:10:35.124    Estimated Reclaim Unit Time Limit:  Not Reported
00:10:35.124      RUH Desc #000:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #001:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #002:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #003:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #004:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #005:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #006:          RUH Type: Initially Isolated
00:10:35.124      RUH Desc #007:          RUH Type: Initially Isolated
00:10:35.124  
00:10:35.124  FDP reclaim unit handle usage log page
00:10:35.124  ======================================
00:10:35.124  Number of Reclaim Unit Handles:       8
00:10:35.124    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:10:35.124    RUH Usage Desc #001:   RUH Attributes: Unused
00:10:35.124    RUH Usage Desc #002:   RUH Attributes: Unused
00:10:35.124    RUH Usage Desc #003:   RUH Attributes: Unused
00:10:35.124    RUH Usage Desc #004:   RUH Attributes: Unused
00:10:35.124    RUH Usage Desc #005:   RUH Attributes: Unused
00:10:35.124    RUH Usage Desc #006:   RUH Attributes: Unused
00:10:35.124    RUH Usage Desc #007:   RUH Attributes: Unused
00:10:35.124  
00:10:35.124  FDP statistics log page
00:10:35.124  =======================
00:10:35.124  Host bytes with metadata written:  541761536
00:10:35.124  [2024-12-09 16:21:04.206072] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 65225 terminated unexpected
00:10:35.124  Media bytes with metadata written: 541818880
00:10:35.124  Media bytes erased:                0
00:10:35.124  
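(Rough check: media bytes ÷ host bytes = 541,818,880 ÷ 541,761,536 ≈ 1.0001, so FDP write amplification is negligible at this point in the run.)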
00:10:35.124  FDP events log page
00:10:35.124  ===================
00:10:35.124  Number of FDP events:              0
00:10:35.124  
00:10:35.124  NVM Specific Namespace Data
00:10:35.124  ===========================
00:10:35.124  Logical Block Storage Tag Mask:               0
00:10:35.124  Protection Information Capabilities:
00:10:35.124    16b Guard Protection Information Storage Tag Support:  No
00:10:35.124    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.124    Storage Tag Check Read Support:                        No
00:10:35.124  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.124  =====================================================
00:10:35.124  NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:35.124  =====================================================
00:10:35.124  Controller Capabilities/Features
00:10:35.124  ================================
00:10:35.124  Vendor ID:                             1b36
00:10:35.124  Subsystem Vendor ID:                   1af4
00:10:35.124  Serial Number:                         12342
00:10:35.124  Model Number:                          QEMU NVMe Ctrl
00:10:35.124  Firmware Version:                      8.0.0
00:10:35.124  Recommended Arb Burst:                 6
00:10:35.124  IEEE OUI Identifier:                   00 54 52
00:10:35.124  Multi-path I/O
00:10:35.124    May have multiple subsystem ports:   No
00:10:35.124    May have multiple controllers:       No
00:10:35.124    Associated with SR-IOV VF:           No
00:10:35.124  Max Data Transfer Size:                524288
00:10:35.124  Max Number of Namespaces:              256
00:10:35.124  Max Number of I/O Queues:              64
00:10:35.124  NVMe Specification Version (VS):       1.4
00:10:35.124  NVMe Specification Version (Identify): 1.4
00:10:35.124  Maximum Queue Entries:                 2048
00:10:35.124  Contiguous Queues Required:            Yes
00:10:35.124  Arbitration Mechanisms Supported
00:10:35.124    Weighted Round Robin:                Not Supported
00:10:35.124    Vendor Specific:                     Not Supported
00:10:35.124  Reset Timeout:                         7500 ms
00:10:35.124  Doorbell Stride:                       4 bytes
00:10:35.124  NVM Subsystem Reset:                   Not Supported
00:10:35.124  Command Sets Supported
00:10:35.124    NVM Command Set:                     Supported
00:10:35.124  Boot Partition:                        Not Supported
00:10:35.124  Memory Page Size Minimum:              4096 bytes
00:10:35.124  Memory Page Size Maximum:              65536 bytes
00:10:35.124  Persistent Memory Region:              Not Supported
00:10:35.124  Optional Asynchronous Events Supported
00:10:35.124    Namespace Attribute Notices:         Supported
00:10:35.124    Firmware Activation Notices:         Not Supported
00:10:35.124    ANA Change Notices:                  Not Supported
00:10:35.124    PLE Aggregate Log Change Notices:    Not Supported
00:10:35.124    LBA Status Info Alert Notices:       Not Supported
00:10:35.124    EGE Aggregate Log Change Notices:    Not Supported
00:10:35.124    Normal NVM Subsystem Shutdown event: Not Supported
00:10:35.124    Zone Descriptor Change Notices:      Not Supported
00:10:35.124    Discovery Log Change Notices:        Not Supported
00:10:35.124  Controller Attributes
00:10:35.124    128-bit Host Identifier:             Not Supported
00:10:35.124    Non-Operational Permissive Mode:     Not Supported
00:10:35.124    NVM Sets:                            Not Supported
00:10:35.124    Read Recovery Levels:                Not Supported
00:10:35.124    Endurance Groups:                    Not Supported
00:10:35.124    Predictable Latency Mode:            Not Supported
00:10:35.124    Traffic Based Keep Alive:            Not Supported
00:10:35.124    Namespace Granularity:               Not Supported
00:10:35.124    SQ Associations:                     Not Supported
00:10:35.124    UUID List:                           Not Supported
00:10:35.124    Multi-Domain Subsystem:              Not Supported
00:10:35.124    Fixed Capacity Management:           Not Supported
00:10:35.124    Variable Capacity Management:        Not Supported
00:10:35.124    Delete Endurance Group:              Not Supported
00:10:35.124    Delete NVM Set:                      Not Supported
00:10:35.124    Extended LBA Formats Supported:      Supported
00:10:35.124    Flexible Data Placement Supported:   Not Supported
00:10:35.124  
00:10:35.124  Controller Memory Buffer Support
00:10:35.124  ================================
00:10:35.124  Supported:                             No
00:10:35.125  
00:10:35.125  Persistent Memory Region Support
00:10:35.125  ================================
00:10:35.125  Supported:                             No
00:10:35.125  
00:10:35.125  Admin Command Set Attributes
00:10:35.125  ============================
00:10:35.125  Security Send/Receive:                 Not Supported
00:10:35.125  Format NVM:                            Supported
00:10:35.125  Firmware Activate/Download:            Not Supported
00:10:35.125  Namespace Management:                  Supported
00:10:35.125  Device Self-Test:                      Not Supported
00:10:35.125  Directives:                            Supported
00:10:35.125  NVMe-MI:                               Not Supported
00:10:35.125  Virtualization Management:             Not Supported
00:10:35.125  Doorbell Buffer Config:                Supported
00:10:35.125  Get LBA Status Capability:             Not Supported
00:10:35.125  Command & Feature Lockdown Capability: Not Supported
00:10:35.125  Abort Command Limit:                   4
00:10:35.125  Async Event Request Limit:             4
00:10:35.125  Number of Firmware Slots:              N/A
00:10:35.125  Firmware Slot 1 Read-Only:             N/A
00:10:35.125  Firmware Activation Without Reset:     N/A
00:10:35.125  Multiple Update Detection Support:     N/A
00:10:35.125  Firmware Update Granularity:           No Information Provided
00:10:35.125  Per-Namespace SMART Log:               Yes
00:10:35.125  Asymmetric Namespace Access Log Page:  Not Supported
00:10:35.125  Subsystem NQN:                         nqn.2019-08.org.qemu:12342
00:10:35.125  Command Effects Log Page:              Supported
00:10:35.125  Get Log Page Extended Data:            Supported
00:10:35.125  Telemetry Log Pages:                   Not Supported
00:10:35.125  Persistent Event Log Pages:            Not Supported
00:10:35.125  Supported Log Pages Log Page:          May Support
00:10:35.125  Commands Supported & Effects Log Page: Not Supported
00:10:35.125  Feature Identifiers & Effects Log Page: May Support
00:10:35.125  NVMe-MI Commands & Effects Log Page:   May Support
00:10:35.125  Data Area 4 for Telemetry Log:         Not Supported
00:10:35.125  Error Log Page Entries Supported:      1
00:10:35.125  Keep Alive:                            Not Supported
00:10:35.125  
00:10:35.125  NVM Command Set Attributes
00:10:35.125  ==========================
00:10:35.125  Submission Queue Entry Size
00:10:35.125    Max:                       64
00:10:35.125    Min:                       64
00:10:35.125  Completion Queue Entry Size
00:10:35.125    Max:                       16
00:10:35.125    Min:                       16
00:10:35.125  Number of Namespaces:        256
00:10:35.125  Compare Command:             Supported
00:10:35.125  Write Uncorrectable Command: Not Supported
00:10:35.125  Dataset Management Command:  Supported
00:10:35.125  Write Zeroes Command:        Supported
00:10:35.125  Set Features Save Field:     Supported
00:10:35.125  Reservations:                Not Supported
00:10:35.125  Timestamp:                   Supported
00:10:35.125  Copy:                        Supported
00:10:35.125  Volatile Write Cache:        Present
00:10:35.125  Atomic Write Unit (Normal):  1
00:10:35.125  Atomic Write Unit (PFail):   1
00:10:35.125  Atomic Compare & Write Unit: 1
00:10:35.125  Fused Compare & Write:       Not Supported
00:10:35.125  Scatter-Gather List
00:10:35.125    SGL Command Set:           Supported
00:10:35.125    SGL Keyed:                 Not Supported
00:10:35.125    SGL Bit Bucket Descriptor: Not Supported
00:10:35.125    SGL Metadata Pointer:      Not Supported
00:10:35.125    Oversized SGL:             Not Supported
00:10:35.125    SGL Metadata Address:      Not Supported
00:10:35.125    SGL Offset:                Not Supported
00:10:35.125    Transport SGL Data Block:  Not Supported
00:10:35.125  Replay Protected Memory Block:  Not Supported
00:10:35.125  
00:10:35.125  Firmware Slot Information
00:10:35.125  =========================
00:10:35.125  Active slot:                 1
00:10:35.125  Slot 1 Firmware Revision:    1.0
00:10:35.125  
00:10:35.125  
00:10:35.125  Commands Supported and Effects
00:10:35.125  ==============================
00:10:35.125  Admin Commands
00:10:35.125  --------------
00:10:35.125     Delete I/O Submission Queue (00h): Supported 
00:10:35.125     Create I/O Submission Queue (01h): Supported 
00:10:35.125                    Get Log Page (02h): Supported 
00:10:35.125     Delete I/O Completion Queue (04h): Supported 
00:10:35.125     Create I/O Completion Queue (05h): Supported 
00:10:35.125                        Identify (06h): Supported 
00:10:35.125                           Abort (08h): Supported 
00:10:35.125                    Set Features (09h): Supported 
00:10:35.125                    Get Features (0Ah): Supported 
00:10:35.125      Asynchronous Event Request (0Ch): Supported 
00:10:35.125            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:35.125                  Directive Send (19h): Supported 
00:10:35.125               Directive Receive (1Ah): Supported 
00:10:35.125       Virtualization Management (1Ch): Supported 
00:10:35.125          Doorbell Buffer Config (7Ch): Supported 
00:10:35.125                      Format NVM (80h): Supported LBA-Change 
00:10:35.125  I/O Commands
00:10:35.125  ------------
00:10:35.125                           Flush (00h): Supported LBA-Change 
00:10:35.125                           Write (01h): Supported LBA-Change 
00:10:35.125                            Read (02h): Supported 
00:10:35.125                         Compare (05h): Supported 
00:10:35.125                    Write Zeroes (08h): Supported LBA-Change 
00:10:35.125              Dataset Management (09h): Supported LBA-Change 
00:10:35.125                         Unknown (0Ch): Supported 
00:10:35.125                         Unknown (12h): Supported 
00:10:35.125                            Copy (19h): Supported LBA-Change 
00:10:35.125                         Unknown (1Dh): Supported LBA-Change 
00:10:35.125  
00:10:35.125  Error Log
00:10:35.125  =========
00:10:35.125  
00:10:35.125  Arbitration
00:10:35.125  ===========
00:10:35.125  Arbitration Burst:           no limit
00:10:35.125  
00:10:35.125  Power Management
00:10:35.125  ================
00:10:35.125  Number of Power States:          1
00:10:35.125  Current Power State:             Power State #0
00:10:35.125  Power State #0:
00:10:35.125    Max Power:                     25.00 W
00:10:35.125    Non-Operational State:         Operational
00:10:35.125    Entry Latency:                 16 microseconds
00:10:35.125    Exit Latency:                  4 microseconds
00:10:35.125    Relative Read Throughput:      0
00:10:35.125    Relative Read Latency:         0
00:10:35.125    Relative Write Throughput:     0
00:10:35.125    Relative Write Latency:        0
00:10:35.125    Idle Power:                     Not Reported
00:10:35.125    Active Power:                   Not Reported
00:10:35.125  Non-Operational Permissive Mode: Not Supported
00:10:35.125  
00:10:35.125  Health Information
00:10:35.125  ==================
00:10:35.125  Critical Warnings:
00:10:35.125    Available Spare Space:     OK
00:10:35.125    Temperature:               OK
00:10:35.125    Device Reliability:        OK
00:10:35.125    Read Only:                 No
00:10:35.125    Volatile Memory Backup:    OK
00:10:35.125  Current Temperature:         323 Kelvin (50 Celsius)
00:10:35.125  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:35.125  Available Spare:             0%
00:10:35.125  Available Spare Threshold:   0%
00:10:35.125  Life Percentage Used:        0%
00:10:35.125  Data Units Read:             2571
00:10:35.125  Data Units Written:          2358
00:10:35.125  Host Read Commands:          120584
00:10:35.125  Host Write Commands:         118853
00:10:35.125  Controller Busy Time:        0 minutes
00:10:35.125  Power Cycles:                0
00:10:35.125  Power On Hours:              0 hours
00:10:35.125  Unsafe Shutdowns:            0
00:10:35.125  Unrecoverable Media Errors:  0
00:10:35.125  Lifetime Error Log Entries:  0
00:10:35.125  Warning Temperature Time:    0 minutes
00:10:35.125  Critical Temperature Time:   0 minutes
00:10:35.125  
00:10:35.125  Number of Queues
00:10:35.125  ================
00:10:35.125  Number of I/O Submission Queues:      64
00:10:35.125  Number of I/O Completion Queues:      64
00:10:35.125  
00:10:35.125  ZNS Specific Controller Data
00:10:35.125  ============================
00:10:35.125  Zone Append Size Limit:      0
00:10:35.125  
00:10:35.125  
00:10:35.125  Active Namespaces
00:10:35.125  =================
00:10:35.125  Namespace ID:1
00:10:35.125  Error Recovery Timeout:                Unlimited
00:10:35.125  Command Set Identifier:                NVM (00h)
00:10:35.125  Deallocate:                            Supported
00:10:35.125  Deallocated/Unwritten Error:           Supported
00:10:35.125  Deallocated Read Value:                All 0x00
00:10:35.125  Deallocate in Write Zeroes:            Not Supported
00:10:35.125  Deallocated Guard Field:               0xFFFF
00:10:35.125  Flush:                                 Supported
00:10:35.125  Reservation:                           Not Supported
00:10:35.125  Namespace Sharing Capabilities:        Private
00:10:35.125  Size (in LBAs):                        1048576 (4GiB)
00:10:35.125  Capacity (in LBAs):                    1048576 (4GiB)
00:10:35.125  Utilization (in LBAs):                 1048576 (4GiB)
00:10:35.125  Thin Provisioning:                     Not Supported
00:10:35.125  Per-NS Atomic Units:                   No
00:10:35.125  Maximum Single Source Range Length:    128
00:10:35.125  Maximum Copy Length:                   128
00:10:35.125  Maximum Source Range Count:            128
00:10:35.125  NGUID/EUI64 Never Reused:              No
00:10:35.125  Namespace Write Protected:             No
00:10:35.125  Number of LBA Formats:                 8
00:10:35.125  Current LBA Format:                    LBA Format #04
00:10:35.125  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.125  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.125  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.125  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.125  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.125  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.125  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.125  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.125  
00:10:35.125  NVM Specific Namespace Data
00:10:35.125  ===========================
00:10:35.125  Logical Block Storage Tag Mask:               0
00:10:35.125  Protection Information Capabilities:
00:10:35.125    16b Guard Protection Information Storage Tag Support:  No
00:10:35.125    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.125    Storage Tag Check Read Support:                        No
00:10:35.125  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Namespace ID:2
00:10:35.126  Error Recovery Timeout:                Unlimited
00:10:35.126  Command Set Identifier:                NVM (00h)
00:10:35.126  Deallocate:                            Supported
00:10:35.126  Deallocated/Unwritten Error:           Supported
00:10:35.126  Deallocated Read Value:                All 0x00
00:10:35.126  Deallocate in Write Zeroes:            Not Supported
00:10:35.126  Deallocated Guard Field:               0xFFFF
00:10:35.126  Flush:                                 Supported
00:10:35.126  Reservation:                           Not Supported
00:10:35.126  Namespace Sharing Capabilities:        Private
00:10:35.126  Size (in LBAs):                        1048576 (4GiB)
00:10:35.126  Capacity (in LBAs):                    1048576 (4GiB)
00:10:35.126  Utilization (in LBAs):                 1048576 (4GiB)
00:10:35.126  Thin Provisioning:                     Not Supported
00:10:35.126  Per-NS Atomic Units:                   No
00:10:35.126  Maximum Single Source Range Length:    128
00:10:35.126  Maximum Copy Length:                   128
00:10:35.126  Maximum Source Range Count:            128
00:10:35.126  NGUID/EUI64 Never Reused:              No
00:10:35.126  Namespace Write Protected:             No
00:10:35.126  Number of LBA Formats:                 8
00:10:35.126  Current LBA Format:                    LBA Format #04
00:10:35.126  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.126  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.126  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.126  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.126  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.126  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.126  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.126  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.126  
00:10:35.126  NVM Specific Namespace Data
00:10:35.126  ===========================
00:10:35.126  Logical Block Storage Tag Mask:               0
00:10:35.126  Protection Information Capabilities:
00:10:35.126    16b Guard Protection Information Storage Tag Support:  No
00:10:35.126    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.126    Storage Tag Check Read Support:                        No
00:10:35.126  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Namespace ID:3
00:10:35.126  Error Recovery Timeout:                Unlimited
00:10:35.126  Command Set Identifier:                NVM (00h)
00:10:35.126  Deallocate:                            Supported
00:10:35.126  Deallocated/Unwritten Error:           Supported
00:10:35.126  Deallocated Read Value:                All 0x00
00:10:35.126  Deallocate in Write Zeroes:            Not Supported
00:10:35.126  Deallocated Guard Field:               0xFFFF
00:10:35.126  Flush:                                 Supported
00:10:35.126  Reservation:                           Not Supported
00:10:35.126  Namespace Sharing Capabilities:        Private
00:10:35.126  Size (in LBAs):                        1048576 (4GiB)
00:10:35.126  Capacity (in LBAs):                    1048576 (4GiB)
00:10:35.126  Utilization (in LBAs):                 1048576 (4GiB)
00:10:35.126  Thin Provisioning:                     Not Supported
00:10:35.126  Per-NS Atomic Units:                   No
00:10:35.126  Maximum Single Source Range Length:    128
00:10:35.126  Maximum Copy Length:                   128
00:10:35.126  Maximum Source Range Count:            128
00:10:35.126  NGUID/EUI64 Never Reused:              No
00:10:35.126  Namespace Write Protected:             No
00:10:35.126  Number of LBA Formats:                 8
00:10:35.126  Current LBA Format:                    LBA Format #04
00:10:35.126  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.126  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.126  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.126  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.126  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.126  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.126  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.126  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.126  
00:10:35.126  NVM Specific Namespace Data
00:10:35.126  ===========================
00:10:35.126  Logical Block Storage Tag Mask:               0
00:10:35.126  Protection Information Capabilities:
00:10:35.126    16b Guard Protection Information Storage Tag Support:  No
00:10:35.126    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.126    Storage Tag Check Read Support:                        No
00:10:35.126  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.126   16:21:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:35.126   16:21:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:10:35.386  =====================================================
00:10:35.386  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:35.386  =====================================================
00:10:35.386  Controller Capabilities/Features
00:10:35.386  ================================
00:10:35.386  Vendor ID:                             1b36
00:10:35.386  Subsystem Vendor ID:                   1af4
00:10:35.386  Serial Number:                         12340
00:10:35.386  Model Number:                          QEMU NVMe Ctrl
00:10:35.386  Firmware Version:                      8.0.0
00:10:35.386  Recommended Arb Burst:                 6
00:10:35.386  IEEE OUI Identifier:                   00 54 52
00:10:35.386  Multi-path I/O
00:10:35.386    May have multiple subsystem ports:   No
00:10:35.386    May have multiple controllers:       No
00:10:35.386    Associated with SR-IOV VF:           No
00:10:35.386  Max Data Transfer Size:                524288
00:10:35.386  Max Number of Namespaces:              256
00:10:35.386  Max Number of I/O Queues:              64
00:10:35.386  NVMe Specification Version (VS):       1.4
00:10:35.386  NVMe Specification Version (Identify): 1.4
00:10:35.386  Maximum Queue Entries:                 2048
00:10:35.386  Contiguous Queues Required:            Yes
00:10:35.386  Arbitration Mechanisms Supported
00:10:35.386    Weighted Round Robin:                Not Supported
00:10:35.386    Vendor Specific:                     Not Supported
00:10:35.386  Reset Timeout:                         7500 ms
00:10:35.386  Doorbell Stride:                       4 bytes
00:10:35.386  NVM Subsystem Reset:                   Not Supported
00:10:35.386  Command Sets Supported
00:10:35.386    NVM Command Set:                     Supported
00:10:35.386  Boot Partition:                        Not Supported
00:10:35.386  Memory Page Size Minimum:              4096 bytes
00:10:35.386  Memory Page Size Maximum:              65536 bytes
00:10:35.386  Persistent Memory Region:              Not Supported
00:10:35.386  Optional Asynchronous Events Supported
00:10:35.386    Namespace Attribute Notices:         Supported
00:10:35.386    Firmware Activation Notices:         Not Supported
00:10:35.386    ANA Change Notices:                  Not Supported
00:10:35.386    PLE Aggregate Log Change Notices:    Not Supported
00:10:35.386    LBA Status Info Alert Notices:       Not Supported
00:10:35.386    EGE Aggregate Log Change Notices:    Not Supported
00:10:35.386    Normal NVM Subsystem Shutdown event: Not Supported
00:10:35.386    Zone Descriptor Change Notices:      Not Supported
00:10:35.386    Discovery Log Change Notices:        Not Supported
00:10:35.386  Controller Attributes
00:10:35.386    128-bit Host Identifier:             Not Supported
00:10:35.386    Non-Operational Permissive Mode:     Not Supported
00:10:35.386    NVM Sets:                            Not Supported
00:10:35.386    Read Recovery Levels:                Not Supported
00:10:35.386    Endurance Groups:                    Not Supported
00:10:35.386    Predictable Latency Mode:            Not Supported
00:10:35.386    Traffic Based Keep Alive:            Not Supported
00:10:35.386    Namespace Granularity:               Not Supported
00:10:35.386    SQ Associations:                     Not Supported
00:10:35.386    UUID List:                           Not Supported
00:10:35.386    Multi-Domain Subsystem:              Not Supported
00:10:35.386    Fixed Capacity Management:           Not Supported
00:10:35.386    Variable Capacity Management:        Not Supported
00:10:35.386    Delete Endurance Group:              Not Supported
00:10:35.386    Delete NVM Set:                      Not Supported
00:10:35.386    Extended LBA Formats Supported:      Supported
00:10:35.386    Flexible Data Placement Supported:   Not Supported
00:10:35.386  
00:10:35.386  Controller Memory Buffer Support
00:10:35.386  ================================
00:10:35.386  Supported:                             No
00:10:35.386  
00:10:35.386  Persistent Memory Region Support
00:10:35.386  ================================
00:10:35.386  Supported:                             No
00:10:35.386  
00:10:35.386  Admin Command Set Attributes
00:10:35.386  ============================
00:10:35.386  Security Send/Receive:                 Not Supported
00:10:35.386  Format NVM:                            Supported
00:10:35.386  Firmware Activate/Download:            Not Supported
00:10:35.386  Namespace Management:                  Supported
00:10:35.386  Device Self-Test:                      Not Supported
00:10:35.386  Directives:                            Supported
00:10:35.386  NVMe-MI:                               Not Supported
00:10:35.386  Virtualization Management:             Not Supported
00:10:35.386  Doorbell Buffer Config:                Supported
00:10:35.386  Get LBA Status Capability:             Not Supported
00:10:35.386  Command & Feature Lockdown Capability: Not Supported
00:10:35.386  Abort Command Limit:                   4
00:10:35.386  Async Event Request Limit:             4
00:10:35.386  Number of Firmware Slots:              N/A
00:10:35.386  Firmware Slot 1 Read-Only:             N/A
00:10:35.386  Firmware Activation Without Reset:     N/A
00:10:35.386  Multiple Update Detection Support:     N/A
00:10:35.386  Firmware Update Granularity:           No Information Provided
00:10:35.386  Per-Namespace SMART Log:               Yes
00:10:35.386  Asymmetric Namespace Access Log Page:  Not Supported
00:10:35.386  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:10:35.386  Command Effects Log Page:              Supported
00:10:35.386  Get Log Page Extended Data:            Supported
00:10:35.386  Telemetry Log Pages:                   Not Supported
00:10:35.386  Persistent Event Log Pages:            Not Supported
00:10:35.386  Supported Log Pages Log Page:          May Support
00:10:35.386  Commands Supported & Effects Log Page: Not Supported
00:10:35.386  Feature Identifiers & Effects Log Page: May Support
00:10:35.386  NVMe-MI Commands & Effects Log Page:   May Support
00:10:35.386  Data Area 4 for Telemetry Log:         Not Supported
00:10:35.386  Error Log Page Entries Supported:      1
00:10:35.386  Keep Alive:                            Not Supported
00:10:35.386  
00:10:35.386  NVM Command Set Attributes
00:10:35.386  ==========================
00:10:35.386  Submission Queue Entry Size
00:10:35.386    Max:                       64
00:10:35.386    Min:                       64
00:10:35.386  Completion Queue Entry Size
00:10:35.386    Max:                       16
00:10:35.386    Min:                       16
00:10:35.386  Number of Namespaces:        256
00:10:35.386  Compare Command:             Supported
00:10:35.386  Write Uncorrectable Command: Not Supported
00:10:35.386  Dataset Management Command:  Supported
00:10:35.386  Write Zeroes Command:        Supported
00:10:35.386  Set Features Save Field:     Supported
00:10:35.386  Reservations:                Not Supported
00:10:35.386  Timestamp:                   Supported
00:10:35.386  Copy:                        Supported
00:10:35.386  Volatile Write Cache:        Present
00:10:35.386  Atomic Write Unit (Normal):  1
00:10:35.386  Atomic Write Unit (PFail):   1
00:10:35.386  Atomic Compare & Write Unit: 1
00:10:35.386  Fused Compare & Write:       Not Supported
00:10:35.386  Scatter-Gather List
00:10:35.386    SGL Command Set:           Supported
00:10:35.386    SGL Keyed:                 Not Supported
00:10:35.386    SGL Bit Bucket Descriptor: Not Supported
00:10:35.386    SGL Metadata Pointer:      Not Supported
00:10:35.386    Oversized SGL:             Not Supported
00:10:35.386    SGL Metadata Address:      Not Supported
00:10:35.386    SGL Offset:                Not Supported
00:10:35.386    Transport SGL Data Block:  Not Supported
00:10:35.386  Replay Protected Memory Block:  Not Supported
00:10:35.386  
00:10:35.386  Firmware Slot Information
00:10:35.386  =========================
00:10:35.386  Active slot:                 1
00:10:35.386  Slot 1 Firmware Revision:    1.0
00:10:35.386  
00:10:35.386  
00:10:35.386  Commands Supported and Effects
00:10:35.386  ==============================
00:10:35.386  Admin Commands
00:10:35.386  --------------
00:10:35.386     Delete I/O Submission Queue (00h): Supported 
00:10:35.386     Create I/O Submission Queue (01h): Supported 
00:10:35.386                    Get Log Page (02h): Supported 
00:10:35.386     Delete I/O Completion Queue (04h): Supported 
00:10:35.386     Create I/O Completion Queue (05h): Supported 
00:10:35.386                        Identify (06h): Supported 
00:10:35.386                           Abort (08h): Supported 
00:10:35.386                    Set Features (09h): Supported 
00:10:35.386                    Get Features (0Ah): Supported 
00:10:35.386      Asynchronous Event Request (0Ch): Supported 
00:10:35.386            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:35.386                  Directive Send (19h): Supported 
00:10:35.386               Directive Receive (1Ah): Supported 
00:10:35.386       Virtualization Management (1Ch): Supported 
00:10:35.386          Doorbell Buffer Config (7Ch): Supported 
00:10:35.386                      Format NVM (80h): Supported LBA-Change 
00:10:35.386  I/O Commands
00:10:35.386  ------------
00:10:35.386                           Flush (00h): Supported LBA-Change 
00:10:35.386                           Write (01h): Supported LBA-Change 
00:10:35.386                            Read (02h): Supported 
00:10:35.386                         Compare (05h): Supported 
00:10:35.386                    Write Zeroes (08h): Supported LBA-Change 
00:10:35.386              Dataset Management (09h): Supported LBA-Change 
00:10:35.386                         Unknown (0Ch): Supported 
00:10:35.387                         Unknown (12h): Supported 
00:10:35.387                            Copy (19h): Supported LBA-Change 
00:10:35.387                         Unknown (1Dh): Supported LBA-Change 
00:10:35.387  
00:10:35.387  Error Log
00:10:35.387  =========
00:10:35.387  
00:10:35.387  Arbitration
00:10:35.387  ===========
00:10:35.387  Arbitration Burst:           no limit
00:10:35.387  
00:10:35.387  Power Management
00:10:35.387  ================
00:10:35.387  Number of Power States:          1
00:10:35.387  Current Power State:             Power State #0
00:10:35.387  Power State #0:
00:10:35.387    Max Power:                     25.00 W
00:10:35.387    Non-Operational State:         Operational
00:10:35.387    Entry Latency:                 16 microseconds
00:10:35.387    Exit Latency:                  4 microseconds
00:10:35.387    Relative Read Throughput:      0
00:10:35.387    Relative Read Latency:         0
00:10:35.387    Relative Write Throughput:     0
00:10:35.387    Relative Write Latency:        0
00:10:35.387    Idle Power:                     Not Reported
00:10:35.387    Active Power:                   Not Reported
00:10:35.387  Non-Operational Permissive Mode: Not Supported
00:10:35.387  
00:10:35.387  Health Information
00:10:35.387  ==================
00:10:35.387  Critical Warnings:
00:10:35.387    Available Spare Space:     OK
00:10:35.387    Temperature:               OK
00:10:35.387    Device Reliability:        OK
00:10:35.387    Read Only:                 No
00:10:35.387    Volatile Memory Backup:    OK
00:10:35.387  Current Temperature:         323 Kelvin (50 Celsius)
00:10:35.387  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:35.387  Available Spare:             0%
00:10:35.387  Available Spare Threshold:   0%
00:10:35.387  Life Percentage Used:        0%
00:10:35.387  Data Units Read:             810
00:10:35.387  Data Units Written:          738
00:10:35.387  Host Read Commands:          39410
00:10:35.387  Host Write Commands:         39196
00:10:35.387  Controller Busy Time:        0 minutes
00:10:35.387  Power Cycles:                0
00:10:35.387  Power On Hours:              0 hours
00:10:35.387  Unsafe Shutdowns:            0
00:10:35.387  Unrecoverable Media Errors:  0
00:10:35.387  Lifetime Error Log Entries:  0
00:10:35.387  Warning Temperature Time:    0 minutes
00:10:35.387  Critical Temperature Time:   0 minutes
00:10:35.387  
00:10:35.387  Number of Queues
00:10:35.387  ================
00:10:35.387  Number of I/O Submission Queues:      64
00:10:35.387  Number of I/O Completion Queues:      64
00:10:35.387  
00:10:35.387  ZNS Specific Controller Data
00:10:35.387  ============================
00:10:35.387  Zone Append Size Limit:      0
00:10:35.387  
00:10:35.387  
00:10:35.387  Active Namespaces
00:10:35.387  =================
00:10:35.387  Namespace ID:1
00:10:35.387  Error Recovery Timeout:                Unlimited
00:10:35.387  Command Set Identifier:                NVM (00h)
00:10:35.387  Deallocate:                            Supported
00:10:35.387  Deallocated/Unwritten Error:           Supported
00:10:35.387  Deallocated Read Value:                All 0x00
00:10:35.387  Deallocate in Write Zeroes:            Not Supported
00:10:35.387  Deallocated Guard Field:               0xFFFF
00:10:35.387  Flush:                                 Supported
00:10:35.387  Reservation:                           Not Supported
00:10:35.387  Metadata Transferred as:               Separate Metadata Buffer
00:10:35.387  Namespace Sharing Capabilities:        Private
00:10:35.387  Size (in LBAs):                        1548666 (5GiB)
00:10:35.387  Capacity (in LBAs):                    1548666 (5GiB)
00:10:35.387  Utilization (in LBAs):                 1548666 (5GiB)
00:10:35.387  Thin Provisioning:                     Not Supported
00:10:35.387  Per-NS Atomic Units:                   No
00:10:35.387  Maximum Single Source Range Length:    128
00:10:35.387  Maximum Copy Length:                   128
00:10:35.387  Maximum Source Range Count:            128
00:10:35.387  NGUID/EUI64 Never Reused:              No
00:10:35.387  Namespace Write Protected:             No
00:10:35.387  Number of LBA Formats:                 8
00:10:35.387  Current LBA Format:                    LBA Format #07
00:10:35.387  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.387  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.387  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.387  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.387  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.387  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.387  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.387  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.387  
00:10:35.387  NVM Specific Namespace Data
00:10:35.387  ===========================
00:10:35.387  Logical Block Storage Tag Mask:               0
00:10:35.387  Protection Information Capabilities:
00:10:35.387    16b Guard Protection Information Storage Tag Support:  No
00:10:35.387    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.387    Storage Tag Check Read Support:                        No
00:10:35.387  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.387  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
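The namespace above reports its size both in LBAs and in GiB. The GiB figure follows from the current LBA format: 1548666 LBAs at the 4096-byte data size of LBA Format #07 is 6343335936 bytes, which truncates to 5 GiB. A minimal standalone sketch of that arithmetic (the truncate-to-whole-GiB behavior and the exclusion of per-LBA metadata are assumptions about how the tool rounds, not taken from its source):

```c
/* Standalone sketch (not SPDK code): reproduce the "(5GiB)" figure printed
 * next to "Size (in LBAs)" above, assuming LBA count times the data size of
 * the current LBA format, truncated to whole GiB; metadata is not counted. */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t nsze = 1548666;    /* Size (in LBAs) from the dump above */
	uint64_t data_size = 4096;  /* LBA Format #07: Data Size 4096 */
	uint64_t bytes = nsze * data_size;

	printf("%" PRIu64 " LBAs x %" PRIu64 " B = %" PRIu64 " bytes (%" PRIu64 "GiB)\n",
	       nsze, data_size, bytes, bytes >> 30);
	return 0;
}
```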
00:10:35.647   16:21:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:35.647   16:21:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:10:35.647  =====================================================
00:10:35.647  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:35.647  =====================================================
00:10:35.647  Controller Capabilities/Features
00:10:35.647  ================================
00:10:35.647  Vendor ID:                             1b36
00:10:35.647  Subsystem Vendor ID:                   1af4
00:10:35.647  Serial Number:                         12341
00:10:35.648  Model Number:                          QEMU NVMe Ctrl
00:10:35.648  Firmware Version:                      8.0.0
00:10:35.648  Recommended Arb Burst:                 6
00:10:35.648  IEEE OUI Identifier:                   00 54 52
00:10:35.648  Multi-path I/O
00:10:35.648    May have multiple subsystem ports:   No
00:10:35.648    May have multiple controllers:       No
00:10:35.648    Associated with SR-IOV VF:           No
00:10:35.648  Max Data Transfer Size:                524288
00:10:35.648  Max Number of Namespaces:              256
00:10:35.648  Max Number of I/O Queues:              64
00:10:35.648  NVMe Specification Version (VS):       1.4
00:10:35.648  NVMe Specification Version (Identify): 1.4
00:10:35.648  Maximum Queue Entries:                 2048
00:10:35.648  Contiguous Queues Required:            Yes
00:10:35.648  Arbitration Mechanisms Supported
00:10:35.648    Weighted Round Robin:                Not Supported
00:10:35.648    Vendor Specific:                     Not Supported
00:10:35.648  Reset Timeout:                         7500 ms
00:10:35.648  Doorbell Stride:                       4 bytes
00:10:35.648  NVM Subsystem Reset:                   Not Supported
00:10:35.648  Command Sets Supported
00:10:35.648    NVM Command Set:                     Supported
00:10:35.648  Boot Partition:                        Not Supported
00:10:35.648  Memory Page Size Minimum:              4096 bytes
00:10:35.648  Memory Page Size Maximum:              65536 bytes
00:10:35.648  Persistent Memory Region:              Not Supported
00:10:35.648  Optional Asynchronous Events Supported
00:10:35.648    Namespace Attribute Notices:         Supported
00:10:35.648    Firmware Activation Notices:         Not Supported
00:10:35.648    ANA Change Notices:                  Not Supported
00:10:35.648    PLE Aggregate Log Change Notices:    Not Supported
00:10:35.648    LBA Status Info Alert Notices:       Not Supported
00:10:35.648    EGE Aggregate Log Change Notices:    Not Supported
00:10:35.648    Normal NVM Subsystem Shutdown event: Not Supported
00:10:35.648    Zone Descriptor Change Notices:      Not Supported
00:10:35.648    Discovery Log Change Notices:        Not Supported
00:10:35.648  Controller Attributes
00:10:35.648    128-bit Host Identifier:             Not Supported
00:10:35.648    Non-Operational Permissive Mode:     Not Supported
00:10:35.648    NVM Sets:                            Not Supported
00:10:35.648    Read Recovery Levels:                Not Supported
00:10:35.648    Endurance Groups:                    Not Supported
00:10:35.648    Predictable Latency Mode:            Not Supported
00:10:35.648    Traffic Based Keep Alive:            Not Supported
00:10:35.648    Namespace Granularity:               Not Supported
00:10:35.648    SQ Associations:                     Not Supported
00:10:35.648    UUID List:                           Not Supported
00:10:35.648    Multi-Domain Subsystem:              Not Supported
00:10:35.648    Fixed Capacity Management:           Not Supported
00:10:35.648    Variable Capacity Management:        Not Supported
00:10:35.648    Delete Endurance Group:              Not Supported
00:10:35.648    Delete NVM Set:                      Not Supported
00:10:35.648    Extended LBA Formats Supported:      Supported
00:10:35.648    Flexible Data Placement Supported:   Not Supported
00:10:35.648  
00:10:35.648  Controller Memory Buffer Support
00:10:35.648  ================================
00:10:35.648  Supported:                             No
00:10:35.648  
00:10:35.648  Persistent Memory Region Support
00:10:35.648  ================================
00:10:35.648  Supported:                             No
00:10:35.648  
00:10:35.648  Admin Command Set Attributes
00:10:35.648  ============================
00:10:35.648  Security Send/Receive:                 Not Supported
00:10:35.648  Format NVM:                            Supported
00:10:35.648  Firmware Activate/Download:            Not Supported
00:10:35.648  Namespace Management:                  Supported
00:10:35.648  Device Self-Test:                      Not Supported
00:10:35.648  Directives:                            Supported
00:10:35.648  NVMe-MI:                               Not Supported
00:10:35.648  Virtualization Management:             Not Supported
00:10:35.648  Doorbell Buffer Config:                Supported
00:10:35.648  Get LBA Status Capability:             Not Supported
00:10:35.648  Command & Feature Lockdown Capability: Not Supported
00:10:35.648  Abort Command Limit:                   4
00:10:35.648  Async Event Request Limit:             4
00:10:35.648  Number of Firmware Slots:              N/A
00:10:35.648  Firmware Slot 1 Read-Only:             N/A
00:10:35.648  Firmware Activation Without Reset:     N/A
00:10:35.648  Multiple Update Detection Support:     N/A
00:10:35.648  Firmware Update Granularity:           No Information Provided
00:10:35.648  Per-Namespace SMART Log:               Yes
00:10:35.648  Asymmetric Namespace Access Log Page:  Not Supported
00:10:35.648  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:10:35.648  Command Effects Log Page:              Supported
00:10:35.648  Get Log Page Extended Data:            Supported
00:10:35.648  Telemetry Log Pages:                   Not Supported
00:10:35.648  Persistent Event Log Pages:            Not Supported
00:10:35.648  Supported Log Pages Log Page:          May Support
00:10:35.648  Commands Supported & Effects Log Page: Not Supported
00:10:35.648  Feature Identifiers & Effects Log Page: May Support
00:10:35.648  NVMe-MI Commands & Effects Log Page:   May Support
00:10:35.648  Data Area 4 for Telemetry Log:         Not Supported
00:10:35.648  Error Log Page Entries Supported:      1
00:10:35.648  Keep Alive:                            Not Supported
00:10:35.648  
00:10:35.648  NVM Command Set Attributes
00:10:35.648  ==========================
00:10:35.648  Submission Queue Entry Size
00:10:35.648    Max:                       64
00:10:35.648    Min:                       64
00:10:35.648  Completion Queue Entry Size
00:10:35.648    Max:                       16
00:10:35.648    Min:                       16
00:10:35.648  Number of Namespaces:        256
00:10:35.648  Compare Command:             Supported
00:10:35.648  Write Uncorrectable Command: Not Supported
00:10:35.648  Dataset Management Command:  Supported
00:10:35.648  Write Zeroes Command:        Supported
00:10:35.648  Set Features Save Field:     Supported
00:10:35.648  Reservations:                Not Supported
00:10:35.648  Timestamp:                   Supported
00:10:35.648  Copy:                        Supported
00:10:35.648  Volatile Write Cache:        Present
00:10:35.648  Atomic Write Unit (Normal):  1
00:10:35.648  Atomic Write Unit (PFail):   1
00:10:35.648  Atomic Compare & Write Unit: 1
00:10:35.648  Fused Compare & Write:       Not Supported
00:10:35.648  Scatter-Gather List
00:10:35.648    SGL Command Set:           Supported
00:10:35.648    SGL Keyed:                 Not Supported
00:10:35.648    SGL Bit Bucket Descriptor: Not Supported
00:10:35.648    SGL Metadata Pointer:      Not Supported
00:10:35.648    Oversized SGL:             Not Supported
00:10:35.648    SGL Metadata Address:      Not Supported
00:10:35.648    SGL Offset:                Not Supported
00:10:35.648    Transport SGL Data Block:  Not Supported
00:10:35.648  Replay Protected Memory Block:  Not Supported
00:10:35.648  
00:10:35.648  Firmware Slot Information
00:10:35.648  =========================
00:10:35.648  Active slot:                 1
00:10:35.648  Slot 1 Firmware Revision:    1.0
00:10:35.648  
00:10:35.648  
00:10:35.648  Commands Supported and Effects
00:10:35.648  ==============================
00:10:35.648  Admin Commands
00:10:35.648  --------------
00:10:35.648     Delete I/O Submission Queue (00h): Supported 
00:10:35.648     Create I/O Submission Queue (01h): Supported 
00:10:35.648                    Get Log Page (02h): Supported 
00:10:35.648     Delete I/O Completion Queue (04h): Supported 
00:10:35.648     Create I/O Completion Queue (05h): Supported 
00:10:35.648                        Identify (06h): Supported 
00:10:35.648                           Abort (08h): Supported 
00:10:35.648                    Set Features (09h): Supported 
00:10:35.648                    Get Features (0Ah): Supported 
00:10:35.648      Asynchronous Event Request (0Ch): Supported 
00:10:35.648            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:35.648                  Directive Send (19h): Supported 
00:10:35.648               Directive Receive (1Ah): Supported 
00:10:35.648       Virtualization Management (1Ch): Supported 
00:10:35.648          Doorbell Buffer Config (7Ch): Supported 
00:10:35.648                      Format NVM (80h): Supported LBA-Change 
00:10:35.648  I/O Commands
00:10:35.648  ------------
00:10:35.648                           Flush (00h): Supported LBA-Change 
00:10:35.648                           Write (01h): Supported LBA-Change 
00:10:35.648                            Read (02h): Supported 
00:10:35.648                         Compare (05h): Supported 
00:10:35.648                    Write Zeroes (08h): Supported LBA-Change 
00:10:35.648              Dataset Management (09h): Supported LBA-Change 
00:10:35.648                         Unknown (0Ch): Supported 
00:10:35.648                         Unknown (12h): Supported 
00:10:35.648                            Copy (19h): Supported LBA-Change 
00:10:35.648                         Unknown (1Dh): Supported LBA-Change 
00:10:35.648  
00:10:35.648  Error Log
00:10:35.648  =========
00:10:35.648  
00:10:35.648  Arbitration
00:10:35.648  ===========
00:10:35.648  Arbitration Burst:           no limit
00:10:35.648  
00:10:35.648  Power Management
00:10:35.648  ================
00:10:35.648  Number of Power States:          1
00:10:35.648  Current Power State:             Power State #0
00:10:35.648  Power State #0:
00:10:35.648    Max Power:                     25.00 W
00:10:35.648    Non-Operational State:         Operational
00:10:35.648    Entry Latency:                 16 microseconds
00:10:35.648    Exit Latency:                  4 microseconds
00:10:35.648    Relative Read Throughput:      0
00:10:35.648    Relative Read Latency:         0
00:10:35.648    Relative Write Throughput:     0
00:10:35.648    Relative Write Latency:        0
00:10:35.908    Idle Power:                     Not Reported
00:10:35.908    Active Power:                   Not Reported
00:10:35.908  Non-Operational Permissive Mode: Not Supported
00:10:35.908  
00:10:35.908  Health Information
00:10:35.908  ==================
00:10:35.908  Critical Warnings:
00:10:35.908    Available Spare Space:     OK
00:10:35.908    Temperature:               OK
00:10:35.908    Device Reliability:        OK
00:10:35.908    Read Only:                 No
00:10:35.908    Volatile Memory Backup:    OK
00:10:35.908  Current Temperature:         323 Kelvin (50 Celsius)
00:10:35.908  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:35.908  Available Spare:             0%
00:10:35.908  Available Spare Threshold:   0%
00:10:35.908  Life Percentage Used:        0%
00:10:35.908  Data Units Read:             1251
00:10:35.908  Data Units Written:          1118
00:10:35.908  Host Read Commands:          58490
00:10:35.908  Host Write Commands:         57283
00:10:35.908  Controller Busy Time:        0 minutes
00:10:35.908  Power Cycles:                0
00:10:35.908  Power On Hours:              0 hours
00:10:35.908  Unsafe Shutdowns:            0
00:10:35.908  Unrecoverable Media Errors:  0
00:10:35.908  Lifetime Error Log Entries:  0
00:10:35.908  Warning Temperature Time:    0 minutes
00:10:35.908  Critical Temperature Time:   0 minutes
00:10:35.908  
00:10:35.908  Number of Queues
00:10:35.908  ================
00:10:35.908  Number of I/O Submission Queues:      64
00:10:35.908  Number of I/O Completion Queues:      64
00:10:35.908  
00:10:35.908  ZNS Specific Controller Data
00:10:35.908  ============================
00:10:35.908  Zone Append Size Limit:      0
00:10:35.908  
00:10:35.908  
00:10:35.908  Active Namespaces
00:10:35.908  =================
00:10:35.908  Namespace ID:1
00:10:35.908  Error Recovery Timeout:                Unlimited
00:10:35.908  Command Set Identifier:                NVM (00h)
00:10:35.908  Deallocate:                            Supported
00:10:35.908  Deallocated/Unwritten Error:           Supported
00:10:35.908  Deallocated Read Value:                All 0x00
00:10:35.908  Deallocate in Write Zeroes:            Not Supported
00:10:35.908  Deallocated Guard Field:               0xFFFF
00:10:35.908  Flush:                                 Supported
00:10:35.908  Reservation:                           Not Supported
00:10:35.908  Namespace Sharing Capabilities:        Private
00:10:35.908  Size (in LBAs):                        1310720 (5GiB)
00:10:35.908  Capacity (in LBAs):                    1310720 (5GiB)
00:10:35.908  Utilization (in LBAs):                 1310720 (5GiB)
00:10:35.908  Thin Provisioning:                     Not Supported
00:10:35.908  Per-NS Atomic Units:                   No
00:10:35.908  Maximum Single Source Range Length:    128
00:10:35.908  Maximum Copy Length:                   128
00:10:35.908  Maximum Source Range Count:            128
00:10:35.908  NGUID/EUI64 Never Reused:              No
00:10:35.908  Namespace Write Protected:             No
00:10:35.908  Number of LBA Formats:                 8
00:10:35.908  Current LBA Format:                    LBA Format #04
00:10:35.908  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:35.908  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:35.908  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:35.908  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:35.908  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:35.908  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:35.908  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:35.908  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:35.908  
00:10:35.908  NVM Specific Namespace Data
00:10:35.908  ===========================
00:10:35.908  Logical Block Storage Tag Mask:               0
00:10:35.908  Protection Information Capabilities:
00:10:35.908    16b Guard Protection Information Storage Tag Support:  No
00:10:35.908    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:35.908    Storage Tag Check Read Support:                        No
00:10:35.908  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:35.908   16:21:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:35.908   16:21:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0
00:10:36.171  =====================================================
00:10:36.171  NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:36.171  =====================================================
00:10:36.171  Controller Capabilities/Features
00:10:36.171  ================================
00:10:36.171  Vendor ID:                             1b36
00:10:36.171  Subsystem Vendor ID:                   1af4
00:10:36.171  Serial Number:                         12342
00:10:36.171  Model Number:                          QEMU NVMe Ctrl
00:10:36.171  Firmware Version:                      8.0.0
00:10:36.171  Recommended Arb Burst:                 6
00:10:36.171  IEEE OUI Identifier:                   00 54 52
00:10:36.171  Multi-path I/O
00:10:36.171    May have multiple subsystem ports:   No
00:10:36.171    May have multiple controllers:       No
00:10:36.171    Associated with SR-IOV VF:           No
00:10:36.171  Max Data Transfer Size:                524288
00:10:36.171  Max Number of Namespaces:              256
00:10:36.171  Max Number of I/O Queues:              64
00:10:36.171  NVMe Specification Version (VS):       1.4
00:10:36.171  NVMe Specification Version (Identify): 1.4
00:10:36.171  Maximum Queue Entries:                 2048
00:10:36.171  Contiguous Queues Required:            Yes
00:10:36.171  Arbitration Mechanisms Supported
00:10:36.171    Weighted Round Robin:                Not Supported
00:10:36.171    Vendor Specific:                     Not Supported
00:10:36.171  Reset Timeout:                         7500 ms
00:10:36.171  Doorbell Stride:                       4 bytes
00:10:36.171  NVM Subsystem Reset:                   Not Supported
00:10:36.171  Command Sets Supported
00:10:36.171    NVM Command Set:                     Supported
00:10:36.171  Boot Partition:                        Not Supported
00:10:36.171  Memory Page Size Minimum:              4096 bytes
00:10:36.171  Memory Page Size Maximum:              65536 bytes
00:10:36.171  Persistent Memory Region:              Not Supported
00:10:36.171  Optional Asynchronous Events Supported
00:10:36.171    Namespace Attribute Notices:         Supported
00:10:36.171    Firmware Activation Notices:         Not Supported
00:10:36.171    ANA Change Notices:                  Not Supported
00:10:36.171    PLE Aggregate Log Change Notices:    Not Supported
00:10:36.171    LBA Status Info Alert Notices:       Not Supported
00:10:36.171    EGE Aggregate Log Change Notices:    Not Supported
00:10:36.171    Normal NVM Subsystem Shutdown event: Not Supported
00:10:36.171    Zone Descriptor Change Notices:      Not Supported
00:10:36.171    Discovery Log Change Notices:        Not Supported
00:10:36.171  Controller Attributes
00:10:36.171    128-bit Host Identifier:             Not Supported
00:10:36.171    Non-Operational Permissive Mode:     Not Supported
00:10:36.171    NVM Sets:                            Not Supported
00:10:36.171    Read Recovery Levels:                Not Supported
00:10:36.171    Endurance Groups:                    Not Supported
00:10:36.171    Predictable Latency Mode:            Not Supported
00:10:36.171    Traffic Based Keep Alive:            Not Supported
00:10:36.171    Namespace Granularity:               Not Supported
00:10:36.171    SQ Associations:                     Not Supported
00:10:36.171    UUID List:                           Not Supported
00:10:36.171    Multi-Domain Subsystem:              Not Supported
00:10:36.171    Fixed Capacity Management:           Not Supported
00:10:36.171    Variable Capacity Management:        Not Supported
00:10:36.171    Delete Endurance Group:              Not Supported
00:10:36.171    Delete NVM Set:                      Not Supported
00:10:36.171    Extended LBA Formats Supported:      Supported
00:10:36.171    Flexible Data Placement Supported:   Not Supported
00:10:36.171  
00:10:36.171  Controller Memory Buffer Support
00:10:36.171  ================================
00:10:36.171  Supported:                             No
00:10:36.171  
00:10:36.171  Persistent Memory Region Support
00:10:36.171  ================================
00:10:36.171  Supported:                             No
00:10:36.171  
00:10:36.172  Admin Command Set Attributes
00:10:36.172  ============================
00:10:36.172  Security Send/Receive:                 Not Supported
00:10:36.172  Format NVM:                            Supported
00:10:36.172  Firmware Activate/Download:            Not Supported
00:10:36.172  Namespace Management:                  Supported
00:10:36.172  Device Self-Test:                      Not Supported
00:10:36.172  Directives:                            Supported
00:10:36.172  NVMe-MI:                               Not Supported
00:10:36.172  Virtualization Management:             Not Supported
00:10:36.172  Doorbell Buffer Config:                Supported
00:10:36.172  Get LBA Status Capability:             Not Supported
00:10:36.172  Command & Feature Lockdown Capability: Not Supported
00:10:36.172  Abort Command Limit:                   4
00:10:36.172  Async Event Request Limit:             4
00:10:36.172  Number of Firmware Slots:              N/A
00:10:36.172  Firmware Slot 1 Read-Only:             N/A
00:10:36.172  Firmware Activation Without Reset:     N/A
00:10:36.172  Multiple Update Detection Support:     N/A
00:10:36.172  Firmware Update Granularity:           No Information Provided
00:10:36.172  Per-Namespace SMART Log:               Yes
00:10:36.172  Asymmetric Namespace Access Log Page:  Not Supported
00:10:36.172  Subsystem NQN:                         nqn.2019-08.org.qemu:12342
00:10:36.172  Command Effects Log Page:              Supported
00:10:36.172  Get Log Page Extended Data:            Supported
00:10:36.172  Telemetry Log Pages:                   Not Supported
00:10:36.172  Persistent Event Log Pages:            Not Supported
00:10:36.172  Supported Log Pages Log Page:          May Support
00:10:36.172  Commands Supported & Effects Log Page: Not Supported
00:10:36.172  Feature Identifiers & Effects Log Page: May Support
00:10:36.172  NVMe-MI Commands & Effects Log Page:   May Support
00:10:36.172  Data Area 4 for Telemetry Log:         Not Supported
00:10:36.172  Error Log Page Entries Supported:      1
00:10:36.172  Keep Alive:                            Not Supported
00:10:36.172  
00:10:36.172  NVM Command Set Attributes
00:10:36.172  ==========================
00:10:36.172  Submission Queue Entry Size
00:10:36.172    Max:                       64
00:10:36.172    Min:                       64
00:10:36.172  Completion Queue Entry Size
00:10:36.172    Max:                       16
00:10:36.172    Min:                       16
00:10:36.172  Number of Namespaces:        256
00:10:36.172  Compare Command:             Supported
00:10:36.172  Write Uncorrectable Command: Not Supported
00:10:36.172  Dataset Management Command:  Supported
00:10:36.172  Write Zeroes Command:        Supported
00:10:36.172  Set Features Save Field:     Supported
00:10:36.172  Reservations:                Not Supported
00:10:36.172  Timestamp:                   Supported
00:10:36.172  Copy:                        Supported
00:10:36.172  Volatile Write Cache:        Present
00:10:36.172  Atomic Write Unit (Normal):  1
00:10:36.172  Atomic Write Unit (PFail):   1
00:10:36.172  Atomic Compare & Write Unit: 1
00:10:36.172  Fused Compare & Write:       Not Supported
00:10:36.172  Scatter-Gather List
00:10:36.172    SGL Command Set:           Supported
00:10:36.172    SGL Keyed:                 Not Supported
00:10:36.172    SGL Bit Bucket Descriptor: Not Supported
00:10:36.172    SGL Metadata Pointer:      Not Supported
00:10:36.172    Oversized SGL:             Not Supported
00:10:36.172    SGL Metadata Address:      Not Supported
00:10:36.172    SGL Offset:                Not Supported
00:10:36.172    Transport SGL Data Block:  Not Supported
00:10:36.172  Replay Protected Memory Block:  Not Supported
00:10:36.172  
00:10:36.172  Firmware Slot Information
00:10:36.172  =========================
00:10:36.172  Active slot:                 1
00:10:36.172  Slot 1 Firmware Revision:    1.0
00:10:36.172  
00:10:36.172  
00:10:36.172  Commands Supported and Effects
00:10:36.172  ==============================
00:10:36.172  Admin Commands
00:10:36.172  --------------
00:10:36.172     Delete I/O Submission Queue (00h): Supported 
00:10:36.172     Create I/O Submission Queue (01h): Supported 
00:10:36.172                    Get Log Page (02h): Supported 
00:10:36.172     Delete I/O Completion Queue (04h): Supported 
00:10:36.172     Create I/O Completion Queue (05h): Supported 
00:10:36.172                        Identify (06h): Supported 
00:10:36.172                           Abort (08h): Supported 
00:10:36.172                    Set Features (09h): Supported 
00:10:36.172                    Get Features (0Ah): Supported 
00:10:36.172      Asynchronous Event Request (0Ch): Supported 
00:10:36.172            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:36.172                  Directive Send (19h): Supported 
00:10:36.172               Directive Receive (1Ah): Supported 
00:10:36.172       Virtualization Management (1Ch): Supported 
00:10:36.172          Doorbell Buffer Config (7Ch): Supported 
00:10:36.172                      Format NVM (80h): Supported LBA-Change 
00:10:36.172  I/O Commands
00:10:36.172  ------------
00:10:36.172                           Flush (00h): Supported LBA-Change 
00:10:36.172                           Write (01h): Supported LBA-Change 
00:10:36.172                            Read (02h): Supported 
00:10:36.172                         Compare (05h): Supported 
00:10:36.172                    Write Zeroes (08h): Supported LBA-Change 
00:10:36.172              Dataset Management (09h): Supported LBA-Change 
00:10:36.172                         Unknown (0Ch): Supported 
00:10:36.172                         Unknown (12h): Supported 
00:10:36.172                            Copy (19h): Supported LBA-Change 
00:10:36.172                         Unknown (1Dh): Supported LBA-Change 
00:10:36.172  
00:10:36.172  Error Log
00:10:36.172  =========
00:10:36.172  
00:10:36.172  Arbitration
00:10:36.172  ===========
00:10:36.172  Arbitration Burst:           no limit
00:10:36.172  
00:10:36.172  Power Management
00:10:36.172  ================
00:10:36.172  Number of Power States:          1
00:10:36.172  Current Power State:             Power State #0
00:10:36.172  Power State #0:
00:10:36.172    Max Power:                     25.00 W
00:10:36.172    Non-Operational State:         Operational
00:10:36.172    Entry Latency:                 16 microseconds
00:10:36.172    Exit Latency:                  4 microseconds
00:10:36.172    Relative Read Throughput:      0
00:10:36.173    Relative Read Latency:         0
00:10:36.173    Relative Write Throughput:     0
00:10:36.173    Relative Write Latency:        0
00:10:36.173    Idle Power:                     Not Reported
00:10:36.173    Active Power:                   Not Reported
00:10:36.173  Non-Operational Permissive Mode: Not Supported
00:10:36.173  
00:10:36.173  Health Information
00:10:36.173  ==================
00:10:36.173  Critical Warnings:
00:10:36.173    Available Spare Space:     OK
00:10:36.173    Temperature:               OK
00:10:36.173    Device Reliability:        OK
00:10:36.173    Read Only:                 No
00:10:36.173    Volatile Memory Backup:    OK
00:10:36.173  Current Temperature:         323 Kelvin (50 Celsius)
00:10:36.173  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:36.173  Available Spare:             0%
00:10:36.173  Available Spare Threshold:   0%
00:10:36.173  Life Percentage Used:        0%
00:10:36.173  Data Units Read:             2571
00:10:36.173  Data Units Written:          2358
00:10:36.173  Host Read Commands:          120584
00:10:36.173  Host Write Commands:         118853
00:10:36.173  Controller Busy Time:        0 minutes
00:10:36.173  Power Cycles:                0
00:10:36.173  Power On Hours:              0 hours
00:10:36.173  Unsafe Shutdowns:            0
00:10:36.173  Unrecoverable Media Errors:  0
00:10:36.173  Lifetime Error Log Entries:  0
00:10:36.173  Warning Temperature Time:    0 minutes
00:10:36.173  Critical Temperature Time:   0 minutes
00:10:36.173  
00:10:36.173  Number of Queues
00:10:36.173  ================
00:10:36.173  Number of I/O Submission Queues:      64
00:10:36.173  Number of I/O Completion Queues:      64
00:10:36.173  
00:10:36.173  ZNS Specific Controller Data
00:10:36.173  ============================
00:10:36.173  Zone Append Size Limit:      0
00:10:36.173  
00:10:36.173  
00:10:36.173  Active Namespaces
00:10:36.173  =================
00:10:36.173  Namespace ID:1
00:10:36.173  Error Recovery Timeout:                Unlimited
00:10:36.173  Command Set Identifier:                NVM (00h)
00:10:36.173  Deallocate:                            Supported
00:10:36.173  Deallocated/Unwritten Error:           Supported
00:10:36.173  Deallocated Read Value:                All 0x00
00:10:36.173  Deallocate in Write Zeroes:            Not Supported
00:10:36.173  Deallocated Guard Field:               0xFFFF
00:10:36.173  Flush:                                 Supported
00:10:36.173  Reservation:                           Not Supported
00:10:36.173  Namespace Sharing Capabilities:        Private
00:10:36.173  Size (in LBAs):                        1048576 (4GiB)
00:10:36.173  Capacity (in LBAs):                    1048576 (4GiB)
00:10:36.173  Utilization (in LBAs):                 1048576 (4GiB)
00:10:36.173  Thin Provisioning:                     Not Supported
00:10:36.173  Per-NS Atomic Units:                   No
00:10:36.173  Maximum Single Source Range Length:    128
00:10:36.173  Maximum Copy Length:                   128
00:10:36.173  Maximum Source Range Count:            128
00:10:36.173  NGUID/EUI64 Never Reused:              No
00:10:36.173  Namespace Write Protected:             No
00:10:36.173  Number of LBA Formats:                 8
00:10:36.173  Current LBA Format:                    LBA Format #04
00:10:36.173  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:36.173  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:36.173  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:36.173  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:36.173  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:36.173  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:36.173  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:36.173  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:36.173  
00:10:36.173  NVM Specific Namespace Data
00:10:36.173  ===========================
00:10:36.173  Logical Block Storage Tag Mask:               0
00:10:36.173  Protection Information Capabilities:
00:10:36.173    16b Guard Protection Information Storage Tag Support:  No
00:10:36.173    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:36.173    Storage Tag Check Read Support:                        No
00:10:36.173  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.173  Namespace ID:2
00:10:36.173  Error Recovery Timeout:                Unlimited
00:10:36.173  Command Set Identifier:                NVM (00h)
00:10:36.173  Deallocate:                            Supported
00:10:36.173  Deallocated/Unwritten Error:           Supported
00:10:36.173  Deallocated Read Value:                All 0x00
00:10:36.173  Deallocate in Write Zeroes:            Not Supported
00:10:36.173  Deallocated Guard Field:               0xFFFF
00:10:36.173  Flush:                                 Supported
00:10:36.173  Reservation:                           Not Supported
00:10:36.173  Namespace Sharing Capabilities:        Private
00:10:36.173  Size (in LBAs):                        1048576 (4GiB)
00:10:36.173  Capacity (in LBAs):                    1048576 (4GiB)
00:10:36.173  Utilization (in LBAs):                 1048576 (4GiB)
00:10:36.173  Thin Provisioning:                     Not Supported
00:10:36.173  Per-NS Atomic Units:                   No
00:10:36.173  Maximum Single Source Range Length:    128
00:10:36.173  Maximum Copy Length:                   128
00:10:36.173  Maximum Source Range Count:            128
00:10:36.173  NGUID/EUI64 Never Reused:              No
00:10:36.173  Namespace Write Protected:             No
00:10:36.173  Number of LBA Formats:                 8
00:10:36.173  Current LBA Format:                    LBA Format #04
00:10:36.173  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:36.173  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:36.173  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:36.173  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:36.173  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:36.173  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:36.174  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:36.174  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:36.174  
00:10:36.174  NVM Specific Namespace Data
00:10:36.174  ===========================
00:10:36.174  Logical Block Storage Tag Mask:               0
00:10:36.174  Protection Information Capabilities:
00:10:36.174    16b Guard Protection Information Storage Tag Support:  No
00:10:36.174    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:36.174    Storage Tag Check Read Support:                        No
00:10:36.174  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Namespace ID:3
00:10:36.174  Error Recovery Timeout:                Unlimited
00:10:36.174  Command Set Identifier:                NVM (00h)
00:10:36.174  Deallocate:                            Supported
00:10:36.174  Deallocated/Unwritten Error:           Supported
00:10:36.174  Deallocated Read Value:                All 0x00
00:10:36.174  Deallocate in Write Zeroes:            Not Supported
00:10:36.174  Deallocated Guard Field:               0xFFFF
00:10:36.174  Flush:                                 Supported
00:10:36.174  Reservation:                           Not Supported
00:10:36.174  Namespace Sharing Capabilities:        Private
00:10:36.174  Size (in LBAs):                        1048576 (4GiB)
00:10:36.174  Capacity (in LBAs):                    1048576 (4GiB)
00:10:36.174  Utilization (in LBAs):                 1048576 (4GiB)
00:10:36.174  Thin Provisioning:                     Not Supported
00:10:36.174  Per-NS Atomic Units:                   No
00:10:36.174  Maximum Single Source Range Length:    128
00:10:36.174  Maximum Copy Length:                   128
00:10:36.174  Maximum Source Range Count:            128
00:10:36.174  NGUID/EUI64 Never Reused:              No
00:10:36.174  Namespace Write Protected:             No
00:10:36.174  Number of LBA Formats:                 8
00:10:36.174  Current LBA Format:                    LBA Format #04
00:10:36.174  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:36.174  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:36.174  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:36.174  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:36.174  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:36.174  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:36.174  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:36.174  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:36.174  
00:10:36.174  NVM Specific Namespace Data
00:10:36.174  ===========================
00:10:36.174  Logical Block Storage Tag Mask:               0
00:10:36.174  Protection Information Capabilities:
00:10:36.174    16b Guard Protection Information Storage Tag Support:  No
00:10:36.174    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:36.174    Storage Tag Check Read Support:                        No
00:10:36.174  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:36.174  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
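Controller 12342 reports three private 4GiB namespaces (NSID 1-3), each dumped in turn above. The per-namespace sections follow SPDK's active-namespace iterator; a minimal sketch, assuming a ctrlr handle obtained as in the earlier connect sketch:

```c
/* Sketch: walk the active namespaces of an already-connected controller and
 * print ID, sector count, sector size, and byte size, mirroring the
 * per-namespace sections above. Assumes `ctrlr` came from spdk_nvme_connect(). */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID:%" PRIu32 "  sectors:%" PRIu64
		       "  sector size:%" PRIu32 "  size:%" PRIu64 " bytes\n",
		       nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns),
		       spdk_nvme_ns_get_size(ns));
	}
}
```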
00:10:36.174   16:21:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:36.174   16:21:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
00:10:36.435  =====================================================
00:10:36.435  NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:36.435  =====================================================
00:10:36.435  Controller Capabilities/Features
00:10:36.435  ================================
00:10:36.435  Vendor ID:                             1b36
00:10:36.435  Subsystem Vendor ID:                   1af4
00:10:36.435  Serial Number:                         12343
00:10:36.435  Model Number:                          QEMU NVMe Ctrl
00:10:36.435  Firmware Version:                      8.0.0
00:10:36.435  Recommended Arb Burst:                 6
00:10:36.435  IEEE OUI Identifier:                   00 54 52
00:10:36.435  Multi-path I/O
00:10:36.435    May have multiple subsystem ports:   No
00:10:36.435    May have multiple controllers:       Yes
00:10:36.435    Associated with SR-IOV VF:           No
00:10:36.435  Max Data Transfer Size:                524288
00:10:36.435  Max Number of Namespaces:              256
00:10:36.435  Max Number of I/O Queues:              64
00:10:36.435  NVMe Specification Version (VS):       1.4
00:10:36.435  NVMe Specification Version (Identify): 1.4
00:10:36.435  Maximum Queue Entries:                 2048
00:10:36.435  Contiguous Queues Required:            Yes
00:10:36.435  Arbitration Mechanisms Supported
00:10:36.435    Weighted Round Robin:                Not Supported
00:10:36.435    Vendor Specific:                     Not Supported
00:10:36.435  Reset Timeout:                         7500 ms
00:10:36.435  Doorbell Stride:                       4 bytes
00:10:36.435  NVM Subsystem Reset:                   Not Supported
00:10:36.435  Command Sets Supported
00:10:36.435    NVM Command Set:                     Supported
00:10:36.435  Boot Partition:                        Not Supported
00:10:36.435  Memory Page Size Minimum:              4096 bytes
00:10:36.435  Memory Page Size Maximum:              65536 bytes
00:10:36.435  Persistent Memory Region:              Not Supported
00:10:36.435  Optional Asynchronous Events Supported
00:10:36.435    Namespace Attribute Notices:         Supported
00:10:36.435    Firmware Activation Notices:         Not Supported
00:10:36.435    ANA Change Notices:                  Not Supported
00:10:36.435    PLE Aggregate Log Change Notices:    Not Supported
00:10:36.435    LBA Status Info Alert Notices:       Not Supported
00:10:36.435    EGE Aggregate Log Change Notices:    Not Supported
00:10:36.435    Normal NVM Subsystem Shutdown event: Not Supported
00:10:36.435    Zone Descriptor Change Notices:      Not Supported
00:10:36.435    Discovery Log Change Notices:        Not Supported
00:10:36.435  Controller Attributes
00:10:36.435    128-bit Host Identifier:             Not Supported
00:10:36.435    Non-Operational Permissive Mode:     Not Supported
00:10:36.435    NVM Sets:                            Not Supported
00:10:36.435    Read Recovery Levels:                Not Supported
00:10:36.435    Endurance Groups:                    Supported
00:10:36.435    Predictable Latency Mode:            Not Supported
00:10:36.435    Traffic Based Keep Alive:            Not Supported
00:10:36.435    Namespace Granularity:               Not Supported
00:10:36.435    SQ Associations:                     Not Supported
00:10:36.435    UUID List:                           Not Supported
00:10:36.435    Multi-Domain Subsystem:              Not Supported
00:10:36.435    Fixed Capacity Management:           Not Supported
00:10:36.435    Variable Capacity Management:        Not Supported
00:10:36.435    Delete Endurance Group:              Not Supported
00:10:36.435    Delete NVM Set:                      Not Supported
00:10:36.435    Extended LBA Formats Supported:      Supported
00:10:36.435    Flexible Data Placement Supported:   Supported
00:10:36.435  
00:10:36.435  Controller Memory Buffer Support
00:10:36.435  ================================
00:10:36.435  Supported:                             No
00:10:36.435  
00:10:36.435  Persistent Memory Region Support
00:10:36.435  ================================
00:10:36.435  Supported:                             No
00:10:36.435  
00:10:36.435  Admin Command Set Attributes
00:10:36.435  ============================
00:10:36.435  Security Send/Receive:                 Not Supported
00:10:36.435  Format NVM:                            Supported
00:10:36.435  Firmware Activate/Download:            Not Supported
00:10:36.435  Namespace Management:                  Supported
00:10:36.435  Device Self-Test:                      Not Supported
00:10:36.435  Directives:                            Supported
00:10:36.435  NVMe-MI:                               Not Supported
00:10:36.435  Virtualization Management:             Not Supported
00:10:36.435  Doorbell Buffer Config:                Supported
00:10:36.435  Get LBA Status Capability:             Not Supported
00:10:36.435  Command & Feature Lockdown Capability: Not Supported
00:10:36.435  Abort Command Limit:                   4
00:10:36.435  Async Event Request Limit:             4
00:10:36.435  Number of Firmware Slots:              N/A
00:10:36.435  Firmware Slot 1 Read-Only:             N/A
00:10:36.435  Firmware Activation Without Reset:     N/A
00:10:36.435  Multiple Update Detection Support:     N/A
00:10:36.435  Firmware Update Granularity:           No Information Provided
00:10:36.435  Per-Namespace SMART Log:               Yes
00:10:36.435  Asymmetric Namespace Access Log Page:  Not Supported
00:10:36.435  Subsystem NQN:                         nqn.2019-08.org.qemu:fdp-subsys3
00:10:36.435  Command Effects Log Page:              Supported
00:10:36.435  Get Log Page Extended Data:            Supported
00:10:36.435  Telemetry Log Pages:                   Not Supported
00:10:36.435  Persistent Event Log Pages:            Not Supported
00:10:36.435  Supported Log Pages Log Page:          May Support
00:10:36.435  Commands Supported & Effects Log Page: Not Supported
00:10:36.435  Feature Identifiers & Effects Log Page: May Support
00:10:36.435  NVMe-MI Commands & Effects Log Page:   May Support
00:10:36.435  Data Area 4 for Telemetry Log:         Not Supported
00:10:36.435  Error Log Page Entries Supported:      1
00:10:36.435  Keep Alive:                            Not Supported
00:10:36.435  
00:10:36.435  NVM Command Set Attributes
00:10:36.435  ==========================
00:10:36.435  Submission Queue Entry Size
00:10:36.435    Max:                       64
00:10:36.435    Min:                       64
00:10:36.435  Completion Queue Entry Size
00:10:36.435    Max:                       16
00:10:36.435    Min:                       16
00:10:36.435  Number of Namespaces:        256
00:10:36.435  Compare Command:             Supported
00:10:36.435  Write Uncorrectable Command: Not Supported
00:10:36.435  Dataset Management Command:  Supported
00:10:36.435  Write Zeroes Command:        Supported
00:10:36.435  Set Features Save Field:     Supported
00:10:36.435  Reservations:                Not Supported
00:10:36.435  Timestamp:                   Supported
00:10:36.435  Copy:                        Supported
00:10:36.435  Volatile Write Cache:        Present
00:10:36.436  Atomic Write Unit (Normal):  1
00:10:36.436  Atomic Write Unit (PFail):   1
00:10:36.436  Atomic Compare & Write Unit: 1
00:10:36.436  Fused Compare & Write:       Not Supported
00:10:36.436  Scatter-Gather List
00:10:36.436    SGL Command Set:           Supported
00:10:36.436    SGL Keyed:                 Not Supported
00:10:36.436    SGL Bit Bucket Descriptor: Not Supported
00:10:36.436    SGL Metadata Pointer:      Not Supported
00:10:36.436    Oversized SGL:             Not Supported
00:10:36.436    SGL Metadata Address:      Not Supported
00:10:36.436    SGL Offset:                Not Supported
00:10:36.436    Transport SGL Data Block:  Not Supported
00:10:36.436  Replay Protected Memory Block:  Not Supported
00:10:36.436  
00:10:36.436  Firmware Slot Information
00:10:36.436  =========================
00:10:36.436  Active slot:                 1
00:10:36.436  Slot 1 Firmware Revision:    1.0
00:10:36.436  
00:10:36.436  
00:10:36.436  Commands Supported and Effects
00:10:36.436  ==============================
00:10:36.436  Admin Commands
00:10:36.436  --------------
00:10:36.436     Delete I/O Submission Queue (00h): Supported 
00:10:36.436     Create I/O Submission Queue (01h): Supported 
00:10:36.436                    Get Log Page (02h): Supported 
00:10:36.436     Delete I/O Completion Queue (04h): Supported 
00:10:36.436     Create I/O Completion Queue (05h): Supported 
00:10:36.436                        Identify (06h): Supported 
00:10:36.436                           Abort (08h): Supported 
00:10:36.436                    Set Features (09h): Supported 
00:10:36.436                    Get Features (0Ah): Supported 
00:10:36.436      Asynchronous Event Request (0Ch): Supported 
00:10:36.436            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:36.436                  Directive Send (19h): Supported 
00:10:36.436               Directive Receive (1Ah): Supported 
00:10:36.436       Virtualization Management (1Ch): Supported 
00:10:36.436          Doorbell Buffer Config (7Ch): Supported 
00:10:36.436                      Format NVM (80h): Supported LBA-Change 
00:10:36.436  I/O Commands
00:10:36.436  ------------
00:10:36.436                           Flush (00h): Supported LBA-Change 
00:10:36.436                           Write (01h): Supported LBA-Change 
00:10:36.436                            Read (02h): Supported 
00:10:36.436                         Compare (05h): Supported 
00:10:36.436                    Write Zeroes (08h): Supported LBA-Change 
00:10:36.436              Dataset Management (09h): Supported LBA-Change 
00:10:36.436                         Unknown (0Ch): Supported 
00:10:36.436                         Unknown (12h): Supported 
00:10:36.436                            Copy (19h): Supported LBA-Change 
00:10:36.436                         Unknown (1Dh): Supported LBA-Change 
00:10:36.436  
00:10:36.436  Error Log
00:10:36.436  =========
00:10:36.436  
00:10:36.436  Arbitration
00:10:36.436  ===========
00:10:36.436  Arbitration Burst:           no limit
00:10:36.436  
00:10:36.436  Power Management
00:10:36.436  ================
00:10:36.436  Number of Power States:          1
00:10:36.436  Current Power State:             Power State #0
00:10:36.436  Power State #0:
00:10:36.436    Max Power:                     25.00 W
00:10:36.436    Non-Operational State:         Operational
00:10:36.436    Entry Latency:                 16 microseconds
00:10:36.436    Exit Latency:                  4 microseconds
00:10:36.436    Relative Read Throughput:      0
00:10:36.436    Relative Read Latency:         0
00:10:36.436    Relative Write Throughput:     0
00:10:36.436    Relative Write Latency:        0
00:10:36.436    Idle Power:                     Not Reported
00:10:36.436    Active Power:                   Not Reported
00:10:36.436  Non-Operational Permissive Mode: Not Supported
00:10:36.436  
00:10:36.436  Health Information
00:10:36.436  ==================
00:10:36.436  Critical Warnings:
00:10:36.436    Available Spare Space:     OK
00:10:36.436    Temperature:               OK
00:10:36.436    Device Reliability:        OK
00:10:36.436    Read Only:                 No
00:10:36.436    Volatile Memory Backup:    OK
00:10:36.436  Current Temperature:         323 Kelvin (50 Celsius)
00:10:36.436  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:36.436  Available Spare:             0%
00:10:36.436  Available Spare Threshold:   0%
00:10:36.436  Life Percentage Used:        0%
00:10:36.436  Data Units Read:             918
00:10:36.436  Data Units Written:          847
00:10:36.436  Host Read Commands:          40790
00:10:36.436  Host Write Commands:         40213
00:10:36.436  Controller Busy Time:        0 minutes
00:10:36.436  Power Cycles:                0
00:10:36.436  Power On Hours:              0 hours
00:10:36.436  Unsafe Shutdowns:            0
00:10:36.436  Unrecoverable Media Errors:  0
00:10:36.436  Lifetime Error Log Entries:  0
00:10:36.436  Warning Temperature Time:    0 minutes
00:10:36.436  Critical Temperature Time:   0 minutes
00:10:36.436  
00:10:36.436  Number of Queues
00:10:36.436  ================
00:10:36.436  Number of I/O Submission Queues:      64
00:10:36.436  Number of I/O Completion Queues:      64
00:10:36.436  
00:10:36.436  ZNS Specific Controller Data
00:10:36.436  ============================
00:10:36.436  Zone Append Size Limit:      0
00:10:36.436  
00:10:36.436  
00:10:36.436  Active Namespaces
00:10:36.436  =================
00:10:36.436  Namespace ID:1
00:10:36.436  Error Recovery Timeout:                Unlimited
00:10:36.436  Command Set Identifier:                NVM (00h)
00:10:36.436  Deallocate:                            Supported
00:10:36.436  Deallocated/Unwritten Error:           Supported
00:10:36.436  Deallocated Read Value:                All 0x00
00:10:36.436  Deallocate in Write Zeroes:            Not Supported
00:10:36.436  Deallocated Guard Field:               0xFFFF
00:10:36.436  Flush:                                 Supported
00:10:36.436  Reservation:                           Not Supported
00:10:36.436  Namespace Sharing Capabilities:        Multiple Controllers
00:10:36.436  Size (in LBAs):                        262144 (1GiB)
00:10:36.436  Capacity (in LBAs):                    262144 (1GiB)
00:10:36.436  Utilization (in LBAs):                 262144 (1GiB)
00:10:36.436  Thin Provisioning:                     Not Supported
00:10:36.436  Per-NS Atomic Units:                   No
00:10:36.436  Maximum Single Source Range Length:    128
00:10:36.436  Maximum Copy Length:                   128
00:10:36.436  Maximum Source Range Count:            128
00:10:36.436  NGUID/EUI64 Never Reused:              No
00:10:36.436  Namespace Write Protected:             No
00:10:36.436  Endurance group ID:                    1
00:10:36.436  Number of LBA Formats:                 8
00:10:36.436  Current LBA Format:                    LBA Format #04
00:10:36.436  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:36.436  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:36.436  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:36.436  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:36.436  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:36.436  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:36.436  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:36.436  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:36.436  
00:10:36.436  Get Feature FDP:
00:10:36.436  ================
00:10:36.436    Enabled:                 Yes
00:10:36.436    FDP configuration index: 0
00:10:36.436  
00:10:36.436  FDP configurations log page
00:10:36.436  ===========================
00:10:36.436  Number of FDP configurations:         1
00:10:36.436  Version:                              0
00:10:36.436  Size:                                 112
00:10:36.436  FDP Configuration Descriptor:         0
00:10:36.436    Descriptor Size:                    96
00:10:36.436    Reclaim Group Identifier format:    2
00:10:36.436    FDP Volatile Write Cache:           Not Present
00:10:36.436    FDP Configuration:                  Valid
00:10:36.436    Vendor Specific Size:               0
00:10:36.436    Number of Reclaim Groups:           2
00:10:36.436    Number of Reclaim Unit Handles:     8
00:10:36.436    Max Placement Identifiers:          128
00:10:36.436    Number of Namespaces Supported:     256
00:10:36.436    Reclaim Unit Nominal Size:          6000000 bytes
00:10:36.436    Estimated Reclaim Unit Time Limit:  Not Reported
00:10:36.436      RUH Desc #000:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #001:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #002:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #003:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #004:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #005:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #006:          RUH Type: Initially Isolated
00:10:36.436      RUH Desc #007:          RUH Type: Initially Isolated
00:10:36.436  
00:10:36.436  FDP reclaim unit handle usage log page
00:10:36.436  ======================================
00:10:36.436  Number of Reclaim Unit Handles:       8
00:10:36.436    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:10:36.436    RUH Usage Desc #001:   RUH Attributes: Unused
00:10:36.436    RUH Usage Desc #002:   RUH Attributes: Unused
00:10:36.436    RUH Usage Desc #003:   RUH Attributes: Unused
00:10:36.436    RUH Usage Desc #004:   RUH Attributes: Unused
00:10:36.436    RUH Usage Desc #005:   RUH Attributes: Unused
00:10:36.436    RUH Usage Desc #006:   RUH Attributes: Unused
00:10:36.436    RUH Usage Desc #007:   RUH Attributes: Unused
00:10:36.436  
00:10:36.436  FDP statistics log page
00:10:36.436  =======================
00:10:36.436  Host bytes with metadata written:  541761536
00:10:36.436  Media bytes with metadata written: 541818880
00:10:36.436  Media bytes erased:                0
00:10:36.436  
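A note on the FDP statistics above: dividing media bytes written by host bytes written gives the write amplification observed so far, and here the two counters are nearly identical, so amplification is effectively 1.0. A rough check reusing the figures from the log:

    awk 'BEGIN {
        host  = 541761536              # Host bytes with metadata written
        media = 541818880              # Media bytes with metadata written
        printf "write amplification ~= %.4f\n", media / host
    }'
    # expected output: write amplification ~= 1.0001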
00:10:36.436  FDP events log page
00:10:36.436  ===================
00:10:36.436  Number of FDP events:              0
00:10:36.436  
00:10:36.436  NVM Specific Namespace Data
00:10:36.436  ===========================
00:10:36.436  Logical Block Storage Tag Mask:               0
00:10:36.436  Protection Information Capabilities:
00:10:36.436    16b Guard Protection Information Storage Tag Support:  No
00:10:36.436    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:36.436    Storage Tag Check Read Support:                        No
00:10:36.436  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.436  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:36.437  
00:10:36.437  real	0m1.662s
00:10:36.437  user	0m0.590s
00:10:36.437  sys	0m0.855s
00:10:36.437   16:21:05 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:36.437  ************************************
00:10:36.437  END TEST nvme_identify
00:10:36.437   16:21:05 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:10:36.437  ************************************
00:10:36.437   16:21:05 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:10:36.437   16:21:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:36.437   16:21:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:36.437   16:21:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:36.437  ************************************
00:10:36.437  START TEST nvme_perf
00:10:36.437  ************************************
00:10:36.437   16:21:05 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:10:36.437   16:21:05 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:10:37.817  Initializing NVMe Controllers
00:10:37.817  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:37.817  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:37.817  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:37.817  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:37.817  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:37.817  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:37.817  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:37.817  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:37.817  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:37.817  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:37.817  Initialization complete. Launching workers.
00:10:37.817  ========================================================
00:10:37.817                                                                             Latency(us)
00:10:37.817  Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:37.817  PCIE (0000:00:10.0) NSID 1 from core  0:   13615.62     159.56    9422.87    7989.75   51491.41
00:10:37.817  PCIE (0000:00:11.0) NSID 1 from core  0:   13615.62     159.56    9409.19    8061.22   49603.08
00:10:37.817  PCIE (0000:00:13.0) NSID 1 from core  0:   13615.62     159.56    9393.89    7986.03   48460.29
00:10:37.817  PCIE (0000:00:12.0) NSID 1 from core  0:   13615.62     159.56    9378.59    8033.66   46635.42
00:10:37.817  PCIE (0000:00:12.0) NSID 2 from core  0:   13615.62     159.56    9363.45    8009.35   44683.46
00:10:37.817  PCIE (0000:00:12.0) NSID 3 from core  0:   13679.54     160.31    9304.88    8010.14   37232.89
00:10:37.817  ========================================================
00:10:37.817  Total                                  :   81757.65     958.10    9378.75    7986.03   51491.41
00:10:37.817  
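For context on the table above: spdk_nvme_perf was run with queue depth 128 (-q 128), a 100% read workload (-w read), 12288-byte I/Os (-o 12288), and a 1-second run (-t 1); as I read the tool's options, the doubled -LL flag is what requests the detailed per-device latency summaries and histograms that follow, though that reading is an assumption rather than something this log confirms. The MiB/s column is simply IOPS times the I/O size:

    awk 'BEGIN {
        iops = 13615.62                # first row of the table above
        iosz = 12288                   # -o 12288, bytes per I/O
        printf "%.2f MiB/s\n", iops * iosz / (1024 * 1024)
    }'
    # expected output: 159.56 MiB/s, matching the first row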
00:10:37.817  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:37.817  =================================================================================
00:10:37.817    1.00000% :  8211.740us
00:10:37.817   10.00000% :  8422.297us
00:10:37.817   25.00000% :  8632.855us
00:10:37.817   50.00000% :  8948.691us
00:10:37.817   75.00000% :  9211.888us
00:10:37.817   90.00000% :  9527.724us
00:10:37.817   95.00000% : 10054.117us
00:10:37.817   98.00000% : 15370.692us
00:10:37.817   99.00000% : 18739.611us
00:10:37.817   99.50000% : 44427.618us
00:10:37.817   99.90000% : 51165.455us
00:10:37.817   99.99000% : 51586.570us
00:10:37.817   99.99900% : 51586.570us
00:10:37.817   99.99990% : 51586.570us
00:10:37.817   99.99999% : 51586.570us
00:10:37.817  
00:10:37.817  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:37.817  =================================================================================
00:10:37.817    1.00000% :  8264.379us
00:10:37.817   10.00000% :  8474.937us
00:10:37.817   25.00000% :  8685.494us
00:10:37.817   50.00000% :  8896.051us
00:10:37.817   75.00000% :  9159.248us
00:10:37.817   90.00000% :  9475.084us
00:10:37.817   95.00000% :  9948.839us
00:10:37.817   98.00000% : 14844.299us
00:10:37.817   99.00000% : 19371.284us
00:10:37.817   99.50000% : 42743.158us
00:10:37.817   99.90000% : 49270.439us
00:10:37.817   99.99000% : 49691.553us
00:10:37.817   99.99900% : 49691.553us
00:10:37.817   99.99990% : 49691.553us
00:10:37.817   99.99999% : 49691.553us
00:10:37.817  
00:10:37.817  Summary latency data for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:37.817  =================================================================================
00:10:37.817    1.00000% :  8264.379us
00:10:37.817   10.00000% :  8474.937us
00:10:37.817   25.00000% :  8685.494us
00:10:37.817   50.00000% :  8896.051us
00:10:37.817   75.00000% :  9159.248us
00:10:37.817   90.00000% :  9475.084us
00:10:37.817   95.00000% :  9948.839us
00:10:37.817   98.00000% : 15581.250us
00:10:37.817   99.00000% : 20634.628us
00:10:37.817   99.50000% : 41479.814us
00:10:37.817   99.90000% : 48217.651us
00:10:37.817   99.99000% : 48428.209us
00:10:37.817   99.99900% : 48638.766us
00:10:37.817   99.99990% : 48638.766us
00:10:37.817   99.99999% : 48638.766us
00:10:37.817  
00:10:37.817  Summary latency data for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:37.817  =================================================================================
00:10:37.817    1.00000% :  8264.379us
00:10:37.817   10.00000% :  8474.937us
00:10:37.817   25.00000% :  8685.494us
00:10:37.817   50.00000% :  8896.051us
00:10:37.817   75.00000% :  9159.248us
00:10:37.817   90.00000% :  9475.084us
00:10:37.817   95.00000% : 10001.478us
00:10:37.817   98.00000% : 16002.365us
00:10:37.817   99.00000% : 20002.956us
00:10:37.817   99.50000% : 39584.797us
00:10:37.817   99.90000% : 46322.635us
00:10:37.817   99.99000% : 46743.749us
00:10:37.817   99.99900% : 46743.749us
00:10:37.817   99.99990% : 46743.749us
00:10:37.817   99.99999% : 46743.749us
00:10:37.817  
00:10:37.817  Summary latency data for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:37.817  =================================================================================
00:10:37.817    1.00000% :  8264.379us
00:10:37.817   10.00000% :  8474.937us
00:10:37.817   25.00000% :  8685.494us
00:10:37.817   50.00000% :  8896.051us
00:10:37.817   75.00000% :  9159.248us
00:10:37.817   90.00000% :  9475.084us
00:10:37.817   95.00000% : 10212.035us
00:10:37.817   98.00000% : 16212.922us
00:10:37.817   99.00000% : 19476.562us
00:10:37.817   99.50000% : 37689.780us
00:10:37.817   99.90000% : 44427.618us
00:10:37.817   99.99000% : 44848.733us
00:10:37.817   99.99900% : 44848.733us
00:10:37.817   99.99990% : 44848.733us
00:10:37.817   99.99999% : 44848.733us
00:10:37.817  
00:10:37.817  Summary latency data for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:37.817  =================================================================================
00:10:37.817    1.00000% :  8264.379us
00:10:37.817   10.00000% :  8474.937us
00:10:37.817   25.00000% :  8685.494us
00:10:37.817   50.00000% :  8896.051us
00:10:37.817   75.00000% :  9159.248us
00:10:37.817   90.00000% :  9475.084us
00:10:37.817   95.00000% : 10422.593us
00:10:37.817   98.00000% : 15897.086us
00:10:37.817   99.00000% : 18950.169us
00:10:37.817   99.50000% : 30741.385us
00:10:37.817   99.90000% : 37058.108us
00:10:37.817   99.99000% : 37268.665us
00:10:37.817   99.99900% : 37268.665us
00:10:37.817   99.99990% : 37268.665us
00:10:37.817   99.99999% : 37268.665us
00:10:37.817  
00:10:37.817  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:37.817  ==============================================================================
00:10:37.817         Range in us     Cumulative    IO count
00:10:37.817   7948.543 -  8001.182:    0.0220%  (        3)
00:10:37.817   8001.182 -  8053.822:    0.1394%  (       16)
00:10:37.817   8053.822 -  8106.461:    0.3741%  (       32)
00:10:37.817   8106.461 -  8159.100:    0.9903%  (       84)
00:10:37.817   8159.100 -  8211.740:    2.1273%  (      155)
00:10:37.817   8211.740 -  8264.379:    3.5431%  (      193)
00:10:37.817   8264.379 -  8317.018:    5.5825%  (      278)
00:10:37.817   8317.018 -  8369.658:    8.1206%  (      346)
00:10:37.817   8369.658 -  8422.297:   11.0402%  (      398)
00:10:37.817   8422.297 -  8474.937:   14.6494%  (      492)
00:10:37.817   8474.937 -  8527.576:   18.3979%  (      511)
00:10:37.817   8527.576 -  8580.215:   22.5792%  (      570)
00:10:37.817   8580.215 -  8632.855:   26.9366%  (      594)
00:10:37.817   8632.855 -  8685.494:   31.2133%  (      583)
00:10:37.817   8685.494 -  8738.133:   35.5854%  (      596)
00:10:37.817   8738.133 -  8790.773:   40.2435%  (      635)
00:10:37.817   8790.773 -  8843.412:   44.7843%  (      619)
00:10:37.817   8843.412 -  8896.051:   49.3911%  (      628)
00:10:37.817   8896.051 -  8948.691:   54.2033%  (      656)
00:10:37.817   8948.691 -  9001.330:   58.8982%  (      640)
00:10:37.817   9001.330 -  9053.969:   63.5270%  (      631)
00:10:37.817   9053.969 -  9106.609:   68.1265%  (      627)
00:10:37.817   9106.609 -  9159.248:   72.3812%  (      580)
00:10:37.817   9159.248 -  9211.888:   76.5038%  (      562)
00:10:37.817   9211.888 -  9264.527:   79.9663%  (      472)
00:10:37.817   9264.527 -  9317.166:   83.0766%  (      424)
00:10:37.817   9317.166 -  9369.806:   85.6367%  (      349)
00:10:37.817   9369.806 -  9422.445:   87.7934%  (      294)
00:10:37.817   9422.445 -  9475.084:   89.4806%  (      230)
00:10:37.817   9475.084 -  9527.724:   90.8817%  (      191)
00:10:37.817   9527.724 -  9580.363:   91.8060%  (      126)
00:10:37.817   9580.363 -  9633.002:   92.5983%  (      108)
00:10:37.817   9633.002 -  9685.642:   93.2218%  (       85)
00:10:37.817   9685.642 -  9738.281:   93.6987%  (       65)
00:10:37.817   9738.281 -  9790.920:   94.0874%  (       53)
00:10:37.817   9790.920 -  9843.560:   94.4396%  (       48)
00:10:37.817   9843.560 -  9896.199:   94.6963%  (       35)
00:10:37.817   9896.199 -  9948.839:   94.8870%  (       26)
00:10:37.817   9948.839 - 10001.478:   94.9751%  (       12)
00:10:37.817  10001.478 - 10054.117:   95.0778%  (       14)
00:10:37.817  10054.117 - 10106.757:   95.1951%  (       16)
00:10:37.817  10106.757 - 10159.396:   95.2685%  (       10)
00:10:37.817  10159.396 - 10212.035:   95.3125%  (        6)
00:10:37.817  10212.035 - 10264.675:   95.3418%  (        4)
00:10:37.817  10264.675 - 10317.314:   95.3712%  (        4)
00:10:37.817  10317.314 - 10369.953:   95.4005%  (        4)
00:10:37.817  10369.953 - 10422.593:   95.4592%  (        8)
00:10:37.817  10422.593 - 10475.232:   95.5106%  (        7)
00:10:37.817  10475.232 - 10527.871:   95.5619%  (        7)
00:10:37.817  10527.871 - 10580.511:   95.6206%  (        8)
00:10:37.817  10580.511 - 10633.150:   95.6793%  (        8)
00:10:37.817  10633.150 - 10685.790:   95.7306%  (        7)
00:10:37.817  10685.790 - 10738.429:   95.7746%  (        6)
00:10:37.817  10738.429 - 10791.068:   95.8260%  (        7)
00:10:37.817  10791.068 - 10843.708:   95.8920%  (        9)
00:10:37.817  10843.708 - 10896.347:   95.9214%  (        4)
00:10:37.817  10896.347 - 10948.986:   95.9800%  (        8)
00:10:37.817  10948.986 - 11001.626:   96.0461%  (        9)
00:10:37.817  11001.626 - 11054.265:   96.1048%  (        8)
00:10:37.817  11054.265 - 11106.904:   96.1634%  (        8)
00:10:37.817  11106.904 - 11159.544:   96.2148%  (        7)
00:10:37.817  11159.544 - 11212.183:   96.2735%  (        8)
00:10:37.817  11212.183 - 11264.822:   96.3175%  (        6)
00:10:37.817  11264.822 - 11317.462:   96.3688%  (        7)
00:10:37.817  11317.462 - 11370.101:   96.4349%  (        9)
00:10:37.817  11370.101 - 11422.741:   96.4642%  (        4)
00:10:37.817  11422.741 - 11475.380:   96.5082%  (        6)
00:10:37.817  11475.380 - 11528.019:   96.5229%  (        2)
00:10:37.817  11528.019 - 11580.659:   96.5596%  (        5)
00:10:37.817  11580.659 - 11633.298:   96.5669%  (        1)
00:10:37.817  11633.298 - 11685.937:   96.5962%  (        4)
00:10:37.817  11685.937 - 11738.577:   96.6256%  (        4)
00:10:37.817  11738.577 - 11791.216:   96.6476%  (        3)
00:10:37.817  11791.216 - 11843.855:   96.6843%  (        5)
00:10:37.817  11843.855 - 11896.495:   96.6989%  (        2)
00:10:37.817  11896.495 - 11949.134:   96.7283%  (        4)
00:10:37.817  11949.134 - 12001.773:   96.7650%  (        5)
00:10:37.817  12001.773 - 12054.413:   96.7870%  (        3)
00:10:37.817  12054.413 - 12107.052:   96.8090%  (        3)
00:10:37.817  12107.052 - 12159.692:   96.8383%  (        4)
00:10:37.817  12159.692 - 12212.331:   96.8530%  (        2)
00:10:37.817  12212.331 - 12264.970:   96.8677%  (        2)
00:10:37.817  12264.970 - 12317.610:   96.8823%  (        2)
00:10:37.817  12317.610 - 12370.249:   96.9337%  (        7)
00:10:37.817  12370.249 - 12422.888:   96.9704%  (        5)
00:10:37.817  12422.888 - 12475.528:   97.0070%  (        5)
00:10:37.817  12475.528 - 12528.167:   97.0437%  (        5)
00:10:37.817  12528.167 - 12580.806:   97.0731%  (        4)
00:10:37.817  12580.806 - 12633.446:   97.1244%  (        7)
00:10:37.817  12633.446 - 12686.085:   97.1611%  (        5)
00:10:37.817  12686.085 - 12738.724:   97.2051%  (        6)
00:10:37.817  12738.724 - 12791.364:   97.2198%  (        2)
00:10:37.817  12791.364 - 12844.003:   97.2711%  (        7)
00:10:37.817  12844.003 - 12896.643:   97.2931%  (        3)
00:10:37.817  12896.643 - 12949.282:   97.3445%  (        7)
00:10:37.817  12949.282 - 13001.921:   97.3738%  (        4)
00:10:37.817  13001.921 - 13054.561:   97.4178%  (        6)
00:10:37.817  13054.561 - 13107.200:   97.4472%  (        4)
00:10:37.817  13107.200 - 13159.839:   97.4765%  (        4)
00:10:37.817  13159.839 - 13212.479:   97.5132%  (        5)
00:10:37.817  13212.479 - 13265.118:   97.5572%  (        6)
00:10:37.817  13265.118 - 13317.757:   97.5939%  (        5)
00:10:37.817  13317.757 - 13370.397:   97.6086%  (        2)
00:10:37.817  13370.397 - 13423.036:   97.6159%  (        1)
00:10:37.817  13423.036 - 13475.676:   97.6379%  (        3)
00:10:37.817  13475.676 - 13580.954:   97.6526%  (        2)
00:10:37.817  14002.069 - 14107.348:   97.6599%  (        1)
00:10:37.817  14107.348 - 14212.627:   97.6893%  (        4)
00:10:37.817  14212.627 - 14317.905:   97.7186%  (        4)
00:10:37.817  14317.905 - 14423.184:   97.7479%  (        4)
00:10:37.817  14423.184 - 14528.463:   97.7846%  (        5)
00:10:37.817  14528.463 - 14633.741:   97.8066%  (        3)
00:10:37.817  14633.741 - 14739.020:   97.8360%  (        4)
00:10:37.817  14739.020 - 14844.299:   97.8653%  (        4)
00:10:37.817  14844.299 - 14949.578:   97.9093%  (        6)
00:10:37.817  14949.578 - 15054.856:   97.9313%  (        3)
00:10:37.817  15054.856 - 15160.135:   97.9607%  (        4)
00:10:37.817  15160.135 - 15265.414:   97.9900%  (        4)
00:10:37.817  15265.414 - 15370.692:   98.0194%  (        4)
00:10:37.817  15370.692 - 15475.971:   98.0414%  (        3)
00:10:37.817  15475.971 - 15581.250:   98.0707%  (        4)
00:10:37.817  15581.250 - 15686.529:   98.1001%  (        4)
00:10:37.817  15686.529 - 15791.807:   98.1221%  (        3)
00:10:37.817  17160.431 - 17265.709:   98.1367%  (        2)
00:10:37.817  17265.709 - 17370.988:   98.1734%  (        5)
00:10:37.817  17370.988 - 17476.267:   98.2028%  (        4)
00:10:37.817  17476.267 - 17581.545:   98.2541%  (        7)
00:10:37.817  17581.545 - 17686.824:   98.3348%  (       11)
00:10:37.817  17686.824 - 17792.103:   98.4082%  (       10)
00:10:37.817  17792.103 - 17897.382:   98.4815%  (       10)
00:10:37.817  17897.382 - 18002.660:   98.5622%  (       11)
00:10:37.817  18002.660 - 18107.939:   98.6209%  (        8)
00:10:37.817  18107.939 - 18213.218:   98.7089%  (       12)
00:10:37.817  18213.218 - 18318.496:   98.7676%  (        8)
00:10:37.817  18318.496 - 18423.775:   98.8410%  (       10)
00:10:37.817  18423.775 - 18529.054:   98.9143%  (       10)
00:10:37.817  18529.054 - 18634.333:   98.9877%  (       10)
00:10:37.817  18634.333 - 18739.611:   99.0610%  (       10)
00:10:37.817  42532.601 - 42743.158:   99.0684%  (        1)
00:10:37.817  42743.158 - 42953.716:   99.1271%  (        8)
00:10:37.817  42953.716 - 43164.273:   99.1711%  (        6)
00:10:37.817  43164.273 - 43374.831:   99.2224%  (        7)
00:10:37.817  43374.831 - 43585.388:   99.2811%  (        8)
00:10:37.817  43585.388 - 43795.945:   99.3325%  (        7)
00:10:37.817  43795.945 - 44006.503:   99.3911%  (        8)
00:10:37.817  44006.503 - 44217.060:   99.4498%  (        8)
00:10:37.817  44217.060 - 44427.618:   99.5012%  (        7)
00:10:37.817  44427.618 - 44638.175:   99.5305%  (        4)
00:10:37.817  49480.996 - 49691.553:   99.5745%  (        6)
00:10:37.817  49691.553 - 49902.111:   99.6185%  (        6)
00:10:37.817  49902.111 - 50112.668:   99.6772%  (        8)
00:10:37.817  50112.668 - 50323.226:   99.7286%  (        7)
00:10:37.817  50323.226 - 50533.783:   99.7726%  (        6)
00:10:37.817  50533.783 - 50744.341:   99.8313%  (        8)
00:10:37.817  50744.341 - 50954.898:   99.8900%  (        8)
00:10:37.817  50954.898 - 51165.455:   99.9413%  (        7)
00:10:37.817  51165.455 - 51376.013:   99.9853%  (        6)
00:10:37.817  51376.013 - 51586.570:  100.0000%  (        2)
00:10:37.817  
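Reading the histograms: each row is a latency bucket in microseconds, the percentage column is cumulative across buckets, and the parenthesized number is the count of I/Os that landed in that bucket, so the first row whose cumulative value reaches 50% brackets the median (8896.051 - 8948.691 us above, consistent with the 50.00000% : 8948.691us line in the summary). A rough awk sketch that pulls that row out of a saved copy of this perf output; the file name is hypothetical:

    # print the first histogram bucket at or past the 50% cumulative mark
    awk '/% +\(/ {
            p = $0
            sub(/% +\(.*/, "", p)      # strip the "(count)" tail
            n = split(p, f, " ")       # last remaining field is the percentage
            if (f[n] + 0 >= 50) { print; exit }
        }' nvme_perf.log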
00:10:37.817  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:37.817  ==============================================================================
00:10:37.817         Range in us     Cumulative    IO count
00:10:37.817   8053.822 -  8106.461:    0.0293%  (        4)
00:10:37.817   8106.461 -  8159.100:    0.3008%  (       37)
00:10:37.817   8159.100 -  8211.740:    0.7629%  (       63)
00:10:37.817   8211.740 -  8264.379:    1.7972%  (      141)
00:10:37.817   8264.379 -  8317.018:    2.9710%  (      160)
00:10:37.817   8317.018 -  8369.658:    5.0176%  (      279)
00:10:37.817   8369.658 -  8422.297:    7.7171%  (      368)
00:10:37.817   8422.297 -  8474.937:   11.0035%  (      448)
00:10:37.817   8474.937 -  8527.576:   14.9208%  (      534)
00:10:37.817   8527.576 -  8580.215:   19.0214%  (      559)
00:10:37.817   8580.215 -  8632.855:   23.7749%  (      648)
00:10:37.817   8632.855 -  8685.494:   28.8879%  (      697)
00:10:37.817   8685.494 -  8738.133:   34.1109%  (      712)
00:10:37.817   8738.133 -  8790.773:   39.3633%  (      716)
00:10:37.817   8790.773 -  8843.412:   44.7256%  (      731)
00:10:37.817   8843.412 -  8896.051:   50.0880%  (      731)
00:10:37.817   8896.051 -  8948.691:   55.6338%  (      756)
00:10:37.817   8948.691 -  9001.330:   61.1502%  (      752)
00:10:37.817   9001.330 -  9053.969:   66.4466%  (      722)
00:10:37.817   9053.969 -  9106.609:   71.3175%  (      664)
00:10:37.817   9106.609 -  9159.248:   75.7996%  (      611)
00:10:37.817   9159.248 -  9211.888:   79.6802%  (      529)
00:10:37.817   9211.888 -  9264.527:   82.9445%  (      445)
00:10:37.817   9264.527 -  9317.166:   85.6587%  (      370)
00:10:37.817   9317.166 -  9369.806:   87.7714%  (      288)
00:10:37.817   9369.806 -  9422.445:   89.5540%  (      243)
00:10:37.817   9422.445 -  9475.084:   90.8304%  (      174)
00:10:37.817   9475.084 -  9527.724:   91.7840%  (      130)
00:10:37.817   9527.724 -  9580.363:   92.5616%  (      106)
00:10:37.817   9580.363 -  9633.002:   93.1925%  (       86)
00:10:37.817   9633.002 -  9685.642:   93.7573%  (       77)
00:10:37.817   9685.642 -  9738.281:   94.2048%  (       61)
00:10:37.817   9738.281 -  9790.920:   94.5056%  (       41)
00:10:37.817   9790.920 -  9843.560:   94.7330%  (       31)
00:10:37.817   9843.560 -  9896.199:   94.8797%  (       20)
00:10:37.817   9896.199 -  9948.839:   95.0191%  (       19)
00:10:37.817   9948.839 - 10001.478:   95.1438%  (       17)
00:10:37.817  10001.478 - 10054.117:   95.2318%  (       12)
00:10:37.817  10054.117 - 10106.757:   95.3125%  (       11)
00:10:37.817  10106.757 - 10159.396:   95.3859%  (       10)
00:10:37.817  10159.396 - 10212.035:   95.4519%  (        9)
00:10:37.817  10212.035 - 10264.675:   95.5032%  (        7)
00:10:37.817  10264.675 - 10317.314:   95.5619%  (        8)
00:10:37.817  10317.314 - 10369.953:   95.6059%  (        6)
00:10:37.817  10369.953 - 10422.593:   95.6573%  (        7)
00:10:37.817  10422.593 - 10475.232:   95.7013%  (        6)
00:10:37.817  10475.232 - 10527.871:   95.7453%  (        6)
00:10:37.817  10527.871 - 10580.511:   95.7967%  (        7)
00:10:37.817  10580.511 - 10633.150:   95.8407%  (        6)
00:10:37.817  10633.150 - 10685.790:   95.8847%  (        6)
00:10:37.817  10685.790 - 10738.429:   95.9360%  (        7)
00:10:37.817  10738.429 - 10791.068:   95.9874%  (        7)
00:10:37.817  10791.068 - 10843.708:   96.0314%  (        6)
00:10:37.817  10843.708 - 10896.347:   96.0754%  (        6)
00:10:37.817  10896.347 - 10948.986:   96.1341%  (        8)
00:10:37.817  10948.986 - 11001.626:   96.1708%  (        5)
00:10:37.817  11001.626 - 11054.265:   96.2001%  (        4)
00:10:37.817  11054.265 - 11106.904:   96.2148%  (        2)
00:10:37.817  11106.904 - 11159.544:   96.2368%  (        3)
00:10:37.817  11159.544 - 11212.183:   96.2735%  (        5)
00:10:37.817  11212.183 - 11264.822:   96.2881%  (        2)
00:10:37.817  11264.822 - 11317.462:   96.3102%  (        3)
00:10:37.817  11317.462 - 11370.101:   96.3322%  (        3)
00:10:37.817  11370.101 - 11422.741:   96.3468%  (        2)
00:10:37.817  11422.741 - 11475.380:   96.3688%  (        3)
00:10:37.817  11475.380 - 11528.019:   96.3835%  (        2)
00:10:37.817  11528.019 - 11580.659:   96.4055%  (        3)
00:10:37.817  11580.659 - 11633.298:   96.4275%  (        3)
00:10:37.817  11633.298 - 11685.937:   96.4495%  (        3)
00:10:37.817  11685.937 - 11738.577:   96.4862%  (        5)
00:10:37.817  11738.577 - 11791.216:   96.5156%  (        4)
00:10:37.817  11791.216 - 11843.855:   96.5449%  (        4)
00:10:37.817  11843.855 - 11896.495:   96.5816%  (        5)
00:10:37.817  11896.495 - 11949.134:   96.6109%  (        4)
00:10:37.817  11949.134 - 12001.773:   96.6549%  (        6)
00:10:37.817  12001.773 - 12054.413:   96.6916%  (        5)
00:10:37.817  12054.413 - 12107.052:   96.7283%  (        5)
00:10:37.818  12107.052 - 12159.692:   96.7576%  (        4)
00:10:37.818  12159.692 - 12212.331:   96.7870%  (        4)
00:10:37.818  12212.331 - 12264.970:   96.8237%  (        5)
00:10:37.818  12264.970 - 12317.610:   96.8530%  (        4)
00:10:37.818  12317.610 - 12370.249:   96.8897%  (        5)
00:10:37.818  12370.249 - 12422.888:   96.9190%  (        4)
00:10:37.818  12422.888 - 12475.528:   96.9484%  (        4)
00:10:37.818  12475.528 - 12528.167:   96.9630%  (        2)
00:10:37.818  12528.167 - 12580.806:   96.9850%  (        3)
00:10:37.818  12580.806 - 12633.446:   96.9997%  (        2)
00:10:37.818  12633.446 - 12686.085:   97.0144%  (        2)
00:10:37.818  12686.085 - 12738.724:   97.0290%  (        2)
00:10:37.818  12738.724 - 12791.364:   97.0437%  (        2)
00:10:37.818  12791.364 - 12844.003:   97.0584%  (        2)
00:10:37.818  12844.003 - 12896.643:   97.0804%  (        3)
00:10:37.818  12896.643 - 12949.282:   97.0951%  (        2)
00:10:37.818  12949.282 - 13001.921:   97.1097%  (        2)
00:10:37.818  13001.921 - 13054.561:   97.1244%  (        2)
00:10:37.818  13054.561 - 13107.200:   97.1391%  (        2)
00:10:37.818  13107.200 - 13159.839:   97.1538%  (        2)
00:10:37.818  13159.839 - 13212.479:   97.1758%  (        3)
00:10:37.818  13212.479 - 13265.118:   97.1831%  (        1)
00:10:37.818  13265.118 - 13317.757:   97.1978%  (        2)
00:10:37.818  13317.757 - 13370.397:   97.2271%  (        4)
00:10:37.818  13370.397 - 13423.036:   97.2491%  (        3)
00:10:37.818  13423.036 - 13475.676:   97.2858%  (        5)
00:10:37.818  13475.676 - 13580.954:   97.3371%  (        7)
00:10:37.818  13580.954 - 13686.233:   97.3958%  (        8)
00:10:37.818  13686.233 - 13791.512:   97.4545%  (        8)
00:10:37.818  13791.512 - 13896.790:   97.5425%  (       12)
00:10:37.818  13896.790 - 14002.069:   97.6306%  (       12)
00:10:37.818  14002.069 - 14107.348:   97.7259%  (       13)
00:10:37.818  14107.348 - 14212.627:   97.7920%  (        9)
00:10:37.818  14212.627 - 14317.905:   97.8286%  (        5)
00:10:37.818  14317.905 - 14423.184:   97.8653%  (        5)
00:10:37.818  14423.184 - 14528.463:   97.9020%  (        5)
00:10:37.818  14528.463 - 14633.741:   97.9387%  (        5)
00:10:37.818  14633.741 - 14739.020:   97.9754%  (        5)
00:10:37.818  14739.020 - 14844.299:   98.0120%  (        5)
00:10:37.818  14844.299 - 14949.578:   98.0487%  (        5)
00:10:37.818  14949.578 - 15054.856:   98.0781%  (        4)
00:10:37.818  15054.856 - 15160.135:   98.1074%  (        4)
00:10:37.818  15160.135 - 15265.414:   98.1221%  (        2)
00:10:37.818  16844.594 - 16949.873:   98.1587%  (        5)
00:10:37.818  16949.873 - 17055.152:   98.1954%  (        5)
00:10:37.818  17055.152 - 17160.431:   98.2394%  (        6)
00:10:37.818  17160.431 - 17265.709:   98.2835%  (        6)
00:10:37.818  17265.709 - 17370.988:   98.3201%  (        5)
00:10:37.818  17370.988 - 17476.267:   98.3641%  (        6)
00:10:37.818  17476.267 - 17581.545:   98.4008%  (        5)
00:10:37.818  17581.545 - 17686.824:   98.4375%  (        5)
00:10:37.818  17686.824 - 17792.103:   98.4742%  (        5)
00:10:37.818  17792.103 - 17897.382:   98.5182%  (        6)
00:10:37.818  17897.382 - 18002.660:   98.5549%  (        5)
00:10:37.818  18002.660 - 18107.939:   98.5915%  (        5)
00:10:37.818  18318.496 - 18423.775:   98.6062%  (        2)
00:10:37.818  18423.775 - 18529.054:   98.6576%  (        7)
00:10:37.818  18529.054 - 18634.333:   98.6942%  (        5)
00:10:37.818  18634.333 - 18739.611:   98.7456%  (        7)
00:10:37.818  18739.611 - 18844.890:   98.7969%  (        7)
00:10:37.818  18844.890 - 18950.169:   98.8483%  (        7)
00:10:37.818  18950.169 - 19055.447:   98.8923%  (        6)
00:10:37.818  19055.447 - 19160.726:   98.9437%  (        7)
00:10:37.818  19160.726 - 19266.005:   98.9950%  (        7)
00:10:37.818  19266.005 - 19371.284:   99.0390%  (        6)
00:10:37.818  19371.284 - 19476.562:   99.0610%  (        3)
00:10:37.818  40848.141 - 41058.699:   99.0684%  (        1)
00:10:37.818  41058.699 - 41269.256:   99.1197%  (        7)
00:10:37.818  41269.256 - 41479.814:   99.1711%  (        7)
00:10:37.818  41479.814 - 41690.371:   99.2298%  (        8)
00:10:37.818  41690.371 - 41900.929:   99.2884%  (        8)
00:10:37.818  41900.929 - 42111.486:   99.3471%  (        8)
00:10:37.818  42111.486 - 42322.043:   99.3985%  (        7)
00:10:37.818  42322.043 - 42532.601:   99.4645%  (        9)
00:10:37.818  42532.601 - 42743.158:   99.5158%  (        7)
00:10:37.818  42743.158 - 42953.716:   99.5305%  (        2)
00:10:37.818  47796.537 - 48007.094:   99.5525%  (        3)
00:10:37.818  48007.094 - 48217.651:   99.6185%  (        9)
00:10:37.818  48217.651 - 48428.209:   99.6772%  (        8)
00:10:37.818  48428.209 - 48638.766:   99.7286%  (        7)
00:10:37.818  48638.766 - 48849.324:   99.7873%  (        8)
00:10:37.818  48849.324 - 49059.881:   99.8460%  (        8)
00:10:37.818  49059.881 - 49270.439:   99.9046%  (        8)
00:10:37.818  49270.439 - 49480.996:   99.9633%  (        8)
00:10:37.818  49480.996 - 49691.553:  100.0000%  (        5)
00:10:37.818  
00:10:37.818  Latency histogram for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:37.818  ==============================================================================
00:10:37.818         Range in us     Cumulative    IO count
00:10:37.818   7948.543 -  8001.182:    0.0220%  (        3)
00:10:37.818   8001.182 -  8053.822:    0.0440%  (        3)
00:10:37.818   8053.822 -  8106.461:    0.1834%  (       19)
00:10:37.818   8106.461 -  8159.100:    0.4035%  (       30)
00:10:37.818   8159.100 -  8211.740:    0.7409%  (       46)
00:10:37.818   8211.740 -  8264.379:    1.7752%  (      141)
00:10:37.818   8264.379 -  8317.018:    3.0810%  (      178)
00:10:37.818   8317.018 -  8369.658:    5.0910%  (      274)
00:10:37.818   8369.658 -  8422.297:    7.6511%  (      349)
00:10:37.818   8422.297 -  8474.937:   10.9522%  (      450)
00:10:37.818   8474.937 -  8527.576:   14.8034%  (      525)
00:10:37.818   8527.576 -  8580.215:   19.0214%  (      575)
00:10:37.818   8580.215 -  8632.855:   23.8043%  (      652)
00:10:37.818   8632.855 -  8685.494:   28.7045%  (      668)
00:10:37.818   8685.494 -  8738.133:   33.9422%  (      714)
00:10:37.818   8738.133 -  8790.773:   39.2459%  (      723)
00:10:37.818   8790.773 -  8843.412:   44.7843%  (      755)
00:10:37.818   8843.412 -  8896.051:   50.3815%  (      763)
00:10:37.818   8896.051 -  8948.691:   55.8759%  (      749)
00:10:37.818   8948.691 -  9001.330:   61.3410%  (      745)
00:10:37.818   9001.330 -  9053.969:   66.6006%  (      717)
00:10:37.818   9053.969 -  9106.609:   71.5082%  (      669)
00:10:37.818   9106.609 -  9159.248:   76.0270%  (      616)
00:10:37.818   9159.248 -  9211.888:   79.9222%  (      531)
00:10:37.818   9211.888 -  9264.527:   83.2380%  (      452)
00:10:37.818   9264.527 -  9317.166:   85.9302%  (      367)
00:10:37.818   9317.166 -  9369.806:   88.1015%  (      296)
00:10:37.818   9369.806 -  9422.445:   89.8107%  (      233)
00:10:37.818   9422.445 -  9475.084:   91.2192%  (      192)
00:10:37.818   9475.084 -  9527.724:   92.2388%  (      139)
00:10:37.818   9527.724 -  9580.363:   93.0604%  (      112)
00:10:37.818   9580.363 -  9633.002:   93.6106%  (       75)
00:10:37.818   9633.002 -  9685.642:   94.0434%  (       59)
00:10:37.818   9685.642 -  9738.281:   94.3662%  (       44)
00:10:37.818   9738.281 -  9790.920:   94.6303%  (       36)
00:10:37.818   9790.920 -  9843.560:   94.8137%  (       25)
00:10:37.818   9843.560 -  9896.199:   94.9971%  (       25)
00:10:37.818   9896.199 -  9948.839:   95.1731%  (       24)
00:10:37.818   9948.839 - 10001.478:   95.3125%  (       19)
00:10:37.818  10001.478 - 10054.117:   95.4079%  (       13)
00:10:37.818  10054.117 - 10106.757:   95.4959%  (       12)
00:10:37.818  10106.757 - 10159.396:   95.5913%  (       13)
00:10:37.818  10159.396 - 10212.035:   95.6866%  (       13)
00:10:37.818  10212.035 - 10264.675:   95.7600%  (       10)
00:10:37.818  10264.675 - 10317.314:   95.8187%  (        8)
00:10:37.818  10317.314 - 10369.953:   95.8700%  (        7)
00:10:37.818  10369.953 - 10422.593:   95.9140%  (        6)
00:10:37.818  10422.593 - 10475.232:   95.9654%  (        7)
00:10:37.818  10475.232 - 10527.871:   96.0167%  (        7)
00:10:37.818  10527.871 - 10580.511:   96.0607%  (        6)
00:10:37.818  10580.511 - 10633.150:   96.1048%  (        6)
00:10:37.818  10633.150 - 10685.790:   96.1414%  (        5)
00:10:37.818  10685.790 - 10738.429:   96.1561%  (        2)
00:10:37.818  10738.429 - 10791.068:   96.1781%  (        3)
00:10:37.818  10791.068 - 10843.708:   96.1928%  (        2)
00:10:37.818  10843.708 - 10896.347:   96.2075%  (        2)
00:10:37.818  10896.347 - 10948.986:   96.2295%  (        3)
00:10:37.818  10948.986 - 11001.626:   96.2661%  (        5)
00:10:37.818  11001.626 - 11054.265:   96.2808%  (        2)
00:10:37.818  11054.265 - 11106.904:   96.3028%  (        3)
00:10:37.818  11106.904 - 11159.544:   96.3322%  (        4)
00:10:37.818  11159.544 - 11212.183:   96.3688%  (        5)
00:10:37.818  11212.183 - 11264.822:   96.4055%  (        5)
00:10:37.818  11264.822 - 11317.462:   96.4349%  (        4)
00:10:37.818  11317.462 - 11370.101:   96.4715%  (        5)
00:10:37.818  11370.101 - 11422.741:   96.5009%  (        4)
00:10:37.818  11422.741 - 11475.380:   96.5302%  (        4)
00:10:37.818  11475.380 - 11528.019:   96.5596%  (        4)
00:10:37.818  11528.019 - 11580.659:   96.6036%  (        6)
00:10:37.818  11580.659 - 11633.298:   96.6403%  (        5)
00:10:37.818  11633.298 - 11685.937:   96.6696%  (        4)
00:10:37.818  11685.937 - 11738.577:   96.7063%  (        5)
00:10:37.818  11738.577 - 11791.216:   96.7356%  (        4)
00:10:37.818  11791.216 - 11843.855:   96.7650%  (        4)
00:10:37.818  11843.855 - 11896.495:   96.8016%  (        5)
00:10:37.818  11896.495 - 11949.134:   96.8310%  (        4)
00:10:37.818  11949.134 - 12001.773:   96.8603%  (        4)
00:10:37.818  12001.773 - 12054.413:   96.8970%  (        5)
00:10:37.818  12054.413 - 12107.052:   96.9263%  (        4)
00:10:37.818  12107.052 - 12159.692:   96.9630%  (        5)
00:10:37.818  12159.692 - 12212.331:   97.0070%  (        6)
00:10:37.818  12212.331 - 12264.970:   97.0364%  (        4)
00:10:37.818  12264.970 - 12317.610:   97.0657%  (        4)
00:10:37.818  12317.610 - 12370.249:   97.0804%  (        2)
00:10:37.818  12370.249 - 12422.888:   97.0951%  (        2)
00:10:37.818  12422.888 - 12475.528:   97.1171%  (        3)
00:10:37.818  12475.528 - 12528.167:   97.1317%  (        2)
00:10:37.818  12528.167 - 12580.806:   97.1464%  (        2)
00:10:37.818  12580.806 - 12633.446:   97.1538%  (        1)
00:10:37.818  12633.446 - 12686.085:   97.1684%  (        2)
00:10:37.818  12686.085 - 12738.724:   97.1978%  (        4)
00:10:37.818  12738.724 - 12791.364:   97.2051%  (        1)
00:10:37.818  12791.364 - 12844.003:   97.2271%  (        3)
00:10:37.818  12844.003 - 12896.643:   97.2418%  (        2)
00:10:37.818  12896.643 - 12949.282:   97.2565%  (        2)
00:10:37.818  12949.282 - 13001.921:   97.2711%  (        2)
00:10:37.818  13001.921 - 13054.561:   97.2858%  (        2)
00:10:37.818  13054.561 - 13107.200:   97.3005%  (        2)
00:10:37.818  13107.200 - 13159.839:   97.3225%  (        3)
00:10:37.818  13159.839 - 13212.479:   97.3371%  (        2)
00:10:37.818  13212.479 - 13265.118:   97.3518%  (        2)
00:10:37.818  13265.118 - 13317.757:   97.3665%  (        2)
00:10:37.818  13317.757 - 13370.397:   97.3812%  (        2)
00:10:37.818  13370.397 - 13423.036:   97.3958%  (        2)
00:10:37.818  13423.036 - 13475.676:   97.4105%  (        2)
00:10:37.818  13475.676 - 13580.954:   97.4398%  (        4)
00:10:37.818  13580.954 - 13686.233:   97.4692%  (        4)
00:10:37.818  13686.233 - 13791.512:   97.5059%  (        5)
00:10:37.818  13791.512 - 13896.790:   97.5352%  (        4)
00:10:37.818  13896.790 - 14002.069:   97.5646%  (        4)
00:10:37.818  14002.069 - 14107.348:   97.5939%  (        4)
00:10:37.818  14107.348 - 14212.627:   97.6232%  (        4)
00:10:37.818  14212.627 - 14317.905:   97.6526%  (        4)
00:10:37.818  14739.020 - 14844.299:   97.6893%  (        5)
00:10:37.818  14844.299 - 14949.578:   97.7333%  (        6)
00:10:37.818  14949.578 - 15054.856:   97.7920%  (        8)
00:10:37.818  15054.856 - 15160.135:   97.8286%  (        5)
00:10:37.818  15160.135 - 15265.414:   97.8800%  (        7)
00:10:37.818  15265.414 - 15370.692:   97.9313%  (        7)
00:10:37.818  15370.692 - 15475.971:   97.9754%  (        6)
00:10:37.818  15475.971 - 15581.250:   98.0267%  (        7)
00:10:37.818  15581.250 - 15686.529:   98.0781%  (        7)
00:10:37.818  15686.529 - 15791.807:   98.1221%  (        6)
00:10:37.818  15897.086 - 16002.365:   98.1294%  (        1)
00:10:37.818  16002.365 - 16107.643:   98.1661%  (        5)
00:10:37.818  16107.643 - 16212.922:   98.2028%  (        5)
00:10:37.818  16212.922 - 16318.201:   98.2394%  (        5)
00:10:37.818  16318.201 - 16423.480:   98.2761%  (        5)
00:10:37.818  16423.480 - 16528.758:   98.3128%  (        5)
00:10:37.818  16528.758 - 16634.037:   98.3495%  (        5)
00:10:37.818  16634.037 - 16739.316:   98.3862%  (        5)
00:10:37.818  16739.316 - 16844.594:   98.4228%  (        5)
00:10:37.818  16844.594 - 16949.873:   98.4522%  (        4)
00:10:37.818  16949.873 - 17055.152:   98.4888%  (        5)
00:10:37.818  17055.152 - 17160.431:   98.5255%  (        5)
00:10:37.818  17160.431 - 17265.709:   98.5549%  (        4)
00:10:37.818  17265.709 - 17370.988:   98.5915%  (        5)
00:10:37.818  19371.284 - 19476.562:   98.5989%  (        1)
00:10:37.818  19476.562 - 19581.841:   98.6282%  (        4)
00:10:37.818  19581.841 - 19687.120:   98.6722%  (        6)
00:10:37.818  19687.120 - 19792.398:   98.7016%  (        4)
00:10:37.818  19792.398 - 19897.677:   98.7456%  (        6)
00:10:37.818  19897.677 - 20002.956:   98.7823%  (        5)
00:10:37.818  20002.956 - 20108.235:   98.8263%  (        6)
00:10:37.818  20108.235 - 20213.513:   98.8630%  (        5)
00:10:37.818  20213.513 - 20318.792:   98.9070%  (        6)
00:10:37.818  20318.792 - 20424.071:   98.9510%  (        6)
00:10:37.818  20424.071 - 20529.349:   98.9877%  (        5)
00:10:37.818  20529.349 - 20634.628:   99.0317%  (        6)
00:10:37.818  20634.628 - 20739.907:   99.0610%  (        4)
00:10:37.818  39795.354 - 40005.912:   99.1197%  (        8)
00:10:37.818  40005.912 - 40216.469:   99.1711%  (        7)
00:10:37.818  40216.469 - 40427.027:   99.2298%  (        8)
00:10:37.818  40427.027 - 40637.584:   99.2811%  (        7)
00:10:37.818  40637.584 - 40848.141:   99.3471%  (        9)
00:10:37.818  40848.141 - 41058.699:   99.3985%  (        7)
00:10:37.818  41058.699 - 41269.256:   99.4498%  (        7)
00:10:37.818  41269.256 - 41479.814:   99.5085%  (        8)
00:10:37.818  41479.814 - 41690.371:   99.5305%  (        3)
00:10:37.818  46743.749 - 46954.307:   99.5892%  (        8)
00:10:37.818  46954.307 - 47164.864:   99.6479%  (        8)
00:10:37.818  47164.864 - 47375.422:   99.6992%  (        7)
00:10:37.818  47375.422 - 47585.979:   99.7579%  (        8)
00:10:37.818  47585.979 - 47796.537:   99.8166%  (        8)
00:10:37.818  47796.537 - 48007.094:   99.8753%  (        8)
00:10:37.818  48007.094 - 48217.651:   99.9266%  (        7)
00:10:37.818  48217.651 - 48428.209:   99.9927%  (        9)
00:10:37.818  48428.209 - 48638.766:  100.0000%  (        1)
00:10:37.818  
00:10:37.818  Latency histogram for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:37.818  ==============================================================================
00:10:37.818         Range in us     Cumulative    IO count
00:10:37.818   8001.182 -  8053.822:    0.0367%  (        5)
00:10:37.818   8053.822 -  8106.461:    0.1467%  (       15)
00:10:37.818   8106.461 -  8159.100:    0.4475%  (       41)
00:10:37.818   8159.100 -  8211.740:    0.9096%  (       63)
00:10:37.818   8211.740 -  8264.379:    1.7312%  (      112)
00:10:37.818   8264.379 -  8317.018:    3.1543%  (      194)
00:10:37.818   8317.018 -  8369.658:    5.1350%  (      270)
00:10:37.818   8369.658 -  8422.297:    7.8565%  (      371)
00:10:37.818   8422.297 -  8474.937:   11.3043%  (      470)
00:10:37.818   8474.937 -  8527.576:   14.9501%  (      497)
00:10:37.818   8527.576 -  8580.215:   19.0508%  (      559)
00:10:37.818   8580.215 -  8632.855:   23.8630%  (      656)
00:10:37.818   8632.855 -  8685.494:   28.6972%  (      659)
00:10:37.818   8685.494 -  8738.133:   33.8615%  (      704)
00:10:37.818   8738.133 -  8790.773:   39.2092%  (      729)
00:10:37.818   8790.773 -  8843.412:   44.8283%  (      766)
00:10:37.818   8843.412 -  8896.051:   50.3521%  (      753)
00:10:37.818   8896.051 -  8948.691:   55.9126%  (      758)
00:10:37.818   8948.691 -  9001.330:   61.2969%  (      734)
00:10:37.818   9001.330 -  9053.969:   66.4246%  (      699)
00:10:37.818   9053.969 -  9106.609:   71.2735%  (      661)
00:10:37.818   9106.609 -  9159.248:   75.6455%  (      596)
00:10:37.818   9159.248 -  9211.888:   79.4528%  (      519)
00:10:37.818   9211.888 -  9264.527:   82.6658%  (      438)
00:10:37.818   9264.527 -  9317.166:   85.2626%  (      354)
00:10:37.818   9317.166 -  9369.806:   87.5293%  (      309)
00:10:37.818   9369.806 -  9422.445:   89.3266%  (      245)
00:10:37.818   9422.445 -  9475.084:   90.6543%  (      181)
00:10:37.818   9475.084 -  9527.724:   91.6887%  (      141)
00:10:37.818   9527.724 -  9580.363:   92.5029%  (      111)
00:10:37.818   9580.363 -  9633.002:   93.1631%  (       90)
00:10:37.818   9633.002 -  9685.642:   93.6180%  (       62)
00:10:37.818   9685.642 -  9738.281:   93.9627%  (       47)
00:10:37.818   9738.281 -  9790.920:   94.2195%  (       35)
00:10:37.818   9790.920 -  9843.560:   94.4982%  (       38)
00:10:37.818   9843.560 -  9896.199:   94.7183%  (       30)
00:10:37.818   9896.199 -  9948.839:   94.8944%  (       24)
00:10:37.818   9948.839 - 10001.478:   95.0191%  (       17)
00:10:37.818  10001.478 - 10054.117:   95.1438%  (       17)
00:10:37.818  10054.117 - 10106.757:   95.2612%  (       16)
00:10:37.818  10106.757 - 10159.396:   95.3785%  (       16)
00:10:37.818  10159.396 - 10212.035:   95.5032%  (       17)
00:10:37.818  10212.035 - 10264.675:   95.6279%  (       17)
00:10:37.818  10264.675 - 10317.314:   95.7086%  (       11)
00:10:37.818  10317.314 - 10369.953:   95.8113%  (       14)
00:10:37.818  10369.953 - 10422.593:   95.8994%  (       12)
00:10:37.818  10422.593 - 10475.232:   95.9874%  (       12)
00:10:37.818  10475.232 - 10527.871:   96.0387%  (        7)
00:10:37.818  10527.871 - 10580.511:   96.0754%  (        5)
00:10:37.818  10580.511 - 10633.150:   96.1048%  (        4)
00:10:37.818  10633.150 - 10685.790:   96.1341%  (        4)
00:10:37.818  10685.790 - 10738.429:   96.1781%  (        6)
00:10:37.818  10738.429 - 10791.068:   96.2075%  (        4)
00:10:37.818  10791.068 - 10843.708:   96.2368%  (        4)
00:10:37.818  10843.708 - 10896.347:   96.2588%  (        3)
00:10:37.818  10896.347 - 10948.986:   96.2735%  (        2)
00:10:37.818  10948.986 - 11001.626:   96.2955%  (        3)
00:10:37.818  11001.626 - 11054.265:   96.3102%  (        2)
00:10:37.818  11054.265 - 11106.904:   96.3322%  (        3)
00:10:37.818  11106.904 - 11159.544:   96.3468%  (        2)
00:10:37.818  11159.544 - 11212.183:   96.3688%  (        3)
00:10:37.818  11212.183 - 11264.822:   96.3835%  (        2)
00:10:37.818  11264.822 - 11317.462:   96.4202%  (        5)
00:10:37.818  11317.462 - 11370.101:   96.4569%  (        5)
00:10:37.818  11370.101 - 11422.741:   96.4862%  (        4)
00:10:37.818  11422.741 - 11475.380:   96.5229%  (        5)
00:10:37.818  11475.380 - 11528.019:   96.5669%  (        6)
00:10:37.818  11528.019 - 11580.659:   96.5962%  (        4)
00:10:37.818  11580.659 - 11633.298:   96.6403%  (        6)
00:10:37.818  11633.298 - 11685.937:   96.6696%  (        4)
00:10:37.818  11685.937 - 11738.577:   96.7136%  (        6)
00:10:37.818  11738.577 - 11791.216:   96.7430%  (        4)
00:10:37.818  11791.216 - 11843.855:   96.7870%  (        6)
00:10:37.818  11843.855 - 11896.495:   96.8163%  (        4)
00:10:37.818  11896.495 - 11949.134:   96.8530%  (        5)
00:10:37.818  11949.134 - 12001.773:   96.8897%  (        5)
00:10:37.818  12001.773 - 12054.413:   96.9190%  (        4)
00:10:37.818  12054.413 - 12107.052:   96.9630%  (        6)
00:10:37.818  12107.052 - 12159.692:   96.9997%  (        5)
00:10:37.818  12159.692 - 12212.331:   97.0364%  (        5)
00:10:37.818  12212.331 - 12264.970:   97.0657%  (        4)
00:10:37.818  12264.970 - 12317.610:   97.0877%  (        3)
00:10:37.818  12317.610 - 12370.249:   97.1097%  (        3)
00:10:37.818  12370.249 - 12422.888:   97.1317%  (        3)
00:10:37.818  12422.888 - 12475.528:   97.1538%  (        3)
00:10:37.818  12475.528 - 12528.167:   97.1831%  (        4)
00:10:37.818  12528.167 - 12580.806:   97.2198%  (        5)
00:10:37.818  12580.806 - 12633.446:   97.2344%  (        2)
00:10:37.818  12633.446 - 12686.085:   97.2491%  (        2)
00:10:37.818  12686.085 - 12738.724:   97.2638%  (        2)
00:10:37.818  12738.724 - 12791.364:   97.2785%  (        2)
00:10:37.818  12791.364 - 12844.003:   97.2931%  (        2)
00:10:37.818  12844.003 - 12896.643:   97.3151%  (        3)
00:10:37.818  12896.643 - 12949.282:   97.3298%  (        2)
00:10:37.818  12949.282 - 13001.921:   97.3445%  (        2)
00:10:37.818  13001.921 - 13054.561:   97.3592%  (        2)
00:10:37.818  13054.561 - 13107.200:   97.3738%  (        2)
00:10:37.818  13107.200 - 13159.839:   97.3958%  (        3)
00:10:37.818  13159.839 - 13212.479:   97.4105%  (        2)
00:10:37.818  13212.479 - 13265.118:   97.4252%  (        2)
00:10:37.818  13265.118 - 13317.757:   97.4398%  (        2)
00:10:37.818  13317.757 - 13370.397:   97.4545%  (        2)
00:10:37.818  13370.397 - 13423.036:   97.4692%  (        2)
00:10:37.818  13423.036 - 13475.676:   97.4839%  (        2)
00:10:37.818  13475.676 - 13580.954:   97.5132%  (        4)
00:10:37.818  13580.954 - 13686.233:   97.5425%  (        4)
00:10:37.818  13686.233 - 13791.512:   97.5792%  (        5)
00:10:37.819  13791.512 - 13896.790:   97.6086%  (        4)
00:10:37.819  13896.790 - 14002.069:   97.6379%  (        4)
00:10:37.819  14002.069 - 14107.348:   97.6526%  (        2)
00:10:37.819  15370.692 - 15475.971:   97.6599%  (        1)
00:10:37.819  15475.971 - 15581.250:   97.7259%  (        9)
00:10:37.819  15581.250 - 15686.529:   97.7700%  (        6)
00:10:37.819  15686.529 - 15791.807:   97.8433%  (       10)
00:10:37.819  15791.807 - 15897.086:   97.9313%  (       12)
00:10:37.819  15897.086 - 16002.365:   98.0194%  (       12)
00:10:37.819  16002.365 - 16107.643:   98.1001%  (       11)
00:10:37.819  16107.643 - 16212.922:   98.1881%  (       12)
00:10:37.819  16212.922 - 16318.201:   98.2761%  (       12)
00:10:37.819  16318.201 - 16423.480:   98.3495%  (       10)
00:10:37.819  16423.480 - 16528.758:   98.4008%  (        7)
00:10:37.819  16528.758 - 16634.037:   98.4375%  (        5)
00:10:37.819  16634.037 - 16739.316:   98.4742%  (        5)
00:10:37.819  16739.316 - 16844.594:   98.5109%  (        5)
00:10:37.819  16844.594 - 16949.873:   98.5402%  (        4)
00:10:37.819  16949.873 - 17055.152:   98.5769%  (        5)
00:10:37.819  17055.152 - 17160.431:   98.5915%  (        2)
00:10:37.819  18844.890 - 18950.169:   98.5989%  (        1)
00:10:37.819  18950.169 - 19055.447:   98.6429%  (        6)
00:10:37.819  19055.447 - 19160.726:   98.6869%  (        6)
00:10:37.819  19160.726 - 19266.005:   98.7236%  (        5)
00:10:37.819  19266.005 - 19371.284:   98.7676%  (        6)
00:10:37.819  19371.284 - 19476.562:   98.8043%  (        5)
00:10:37.819  19476.562 - 19581.841:   98.8483%  (        6)
00:10:37.819  19581.841 - 19687.120:   98.8850%  (        5)
00:10:37.819  19687.120 - 19792.398:   98.9290%  (        6)
00:10:37.819  19792.398 - 19897.677:   98.9657%  (        5)
00:10:37.819  19897.677 - 20002.956:   99.0097%  (        6)
00:10:37.819  20002.956 - 20108.235:   99.0464%  (        5)
00:10:37.819  20108.235 - 20213.513:   99.0610%  (        2)
00:10:37.819  37900.337 - 38110.895:   99.1197%  (        8)
00:10:37.819  38110.895 - 38321.452:   99.1711%  (        7)
00:10:37.819  38321.452 - 38532.010:   99.2298%  (        8)
00:10:37.819  38532.010 - 38742.567:   99.2884%  (        8)
00:10:37.819  38742.567 - 38953.124:   99.3471%  (        8)
00:10:37.819  38953.124 - 39163.682:   99.3985%  (        7)
00:10:37.819  39163.682 - 39374.239:   99.4572%  (        8)
00:10:37.819  39374.239 - 39584.797:   99.5085%  (        7)
00:10:37.819  39584.797 - 39795.354:   99.5305%  (        3)
00:10:37.819  44848.733 - 45059.290:   99.5672%  (        5)
00:10:37.819  45059.290 - 45269.847:   99.6185%  (        7)
00:10:37.819  45269.847 - 45480.405:   99.6772%  (        8)
00:10:37.819  45480.405 - 45690.962:   99.7433%  (        9)
00:10:37.819  45690.962 - 45901.520:   99.7946%  (        7)
00:10:37.819  45901.520 - 46112.077:   99.8533%  (        8)
00:10:37.819  46112.077 - 46322.635:   99.9120%  (        8)
00:10:37.819  46322.635 - 46533.192:   99.9707%  (        8)
00:10:37.819  46533.192 - 46743.749:  100.0000%  (        4)
00:10:37.819  
00:10:37.819  Latency histogram for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:37.819  ==============================================================================
00:10:37.819         Range in us     Cumulative    IO count
00:10:37.819   8001.182 -  8053.822:    0.0660%  (        9)
00:10:37.819   8053.822 -  8106.461:    0.1247%  (        8)
00:10:37.819   8106.461 -  8159.100:    0.4401%  (       43)
00:10:37.819   8159.100 -  8211.740:    0.9390%  (       68)
00:10:37.819   8211.740 -  8264.379:    1.7606%  (      112)
00:10:37.819   8264.379 -  8317.018:    3.1690%  (      192)
00:10:37.819   8317.018 -  8369.658:    5.0836%  (      261)
00:10:37.819   8369.658 -  8422.297:    7.8565%  (      378)
00:10:37.819   8422.297 -  8474.937:   11.1722%  (      452)
00:10:37.819   8474.937 -  8527.576:   14.8548%  (      502)
00:10:37.819   8527.576 -  8580.215:   19.1094%  (      580)
00:10:37.819   8580.215 -  8632.855:   23.7676%  (      635)
00:10:37.819   8632.855 -  8685.494:   28.7119%  (      674)
00:10:37.819   8685.494 -  8738.133:   33.9935%  (      720)
00:10:37.819   8738.133 -  8790.773:   39.3413%  (      729)
00:10:37.819   8790.773 -  8843.412:   44.7990%  (      744)
00:10:37.819   8843.412 -  8896.051:   50.3154%  (      752)
00:10:37.819   8896.051 -  8948.691:   55.9272%  (      765)
00:10:37.819   8948.691 -  9001.330:   61.3483%  (      739)
00:10:37.819   9001.330 -  9053.969:   66.5200%  (      705)
00:10:37.819   9053.969 -  9106.609:   71.2955%  (      651)
00:10:37.819   9106.609 -  9159.248:   75.7336%  (      605)
00:10:37.819   9159.248 -  9211.888:   79.6875%  (      539)
00:10:37.819   9211.888 -  9264.527:   82.9812%  (      449)
00:10:37.819   9264.527 -  9317.166:   85.5120%  (      345)
00:10:37.819   9317.166 -  9369.806:   87.6320%  (      289)
00:10:37.819   9369.806 -  9422.445:   89.4219%  (      244)
00:10:37.819   9422.445 -  9475.084:   90.7570%  (      182)
00:10:37.819   9475.084 -  9527.724:   91.6887%  (      127)
00:10:37.819   9527.724 -  9580.363:   92.3929%  (       96)
00:10:37.819   9580.363 -  9633.002:   92.9064%  (       70)
00:10:37.819   9633.002 -  9685.642:   93.3906%  (       66)
00:10:37.819   9685.642 -  9738.281:   93.7427%  (       48)
00:10:37.819   9738.281 -  9790.920:   94.0067%  (       36)
00:10:37.819   9790.920 -  9843.560:   94.2195%  (       29)
00:10:37.819   9843.560 -  9896.199:   94.3589%  (       19)
00:10:37.819   9896.199 -  9948.839:   94.5202%  (       22)
00:10:37.819   9948.839 - 10001.478:   94.6376%  (       16)
00:10:37.819  10001.478 - 10054.117:   94.7256%  (       12)
00:10:37.819  10054.117 - 10106.757:   94.8210%  (       13)
00:10:37.819  10106.757 - 10159.396:   94.9164%  (       13)
00:10:37.819  10159.396 - 10212.035:   95.0191%  (       14)
00:10:37.819  10212.035 - 10264.675:   95.1144%  (       13)
00:10:37.819  10264.675 - 10317.314:   95.2098%  (       13)
00:10:37.819  10317.314 - 10369.953:   95.2905%  (       11)
00:10:37.819  10369.953 - 10422.593:   95.3565%  (        9)
00:10:37.819  10422.593 - 10475.232:   95.4152%  (        8)
00:10:37.819  10475.232 - 10527.871:   95.4665%  (        7)
00:10:37.819  10527.871 - 10580.511:   95.5766%  (       15)
00:10:37.819  10580.511 - 10633.150:   95.6719%  (       13)
00:10:37.819  10633.150 - 10685.790:   95.7380%  (        9)
00:10:37.819  10685.790 - 10738.429:   95.8113%  (       10)
00:10:37.819  10738.429 - 10791.068:   95.8847%  (       10)
00:10:37.819  10791.068 - 10843.708:   95.9507%  (        9)
00:10:37.819  10843.708 - 10896.347:   96.0241%  (       10)
00:10:37.819  10896.347 - 10948.986:   96.0754%  (        7)
00:10:37.819  10948.986 - 11001.626:   96.1268%  (        7)
00:10:37.819  11001.626 - 11054.265:   96.1854%  (        8)
00:10:37.819  11054.265 - 11106.904:   96.2515%  (        9)
00:10:37.819  11106.904 - 11159.544:   96.3175%  (        9)
00:10:37.819  11159.544 - 11212.183:   96.3762%  (        8)
00:10:37.819  11212.183 - 11264.822:   96.4349%  (        8)
00:10:37.819  11264.822 - 11317.462:   96.5082%  (       10)
00:10:37.819  11317.462 - 11370.101:   96.5376%  (        4)
00:10:37.819  11370.101 - 11422.741:   96.5596%  (        3)
00:10:37.819  11422.741 - 11475.380:   96.5742%  (        2)
00:10:37.819  11475.380 - 11528.019:   96.5962%  (        3)
00:10:37.819  11528.019 - 11580.659:   96.6183%  (        3)
00:10:37.819  11580.659 - 11633.298:   96.6623%  (        6)
00:10:37.819  11633.298 - 11685.937:   96.6989%  (        5)
00:10:37.819  11685.937 - 11738.577:   96.7430%  (        6)
00:10:37.819  11738.577 - 11791.216:   96.7723%  (        4)
00:10:37.819  11791.216 - 11843.855:   96.8016%  (        4)
00:10:37.819  11843.855 - 11896.495:   96.8310%  (        4)
00:10:37.819  11896.495 - 11949.134:   96.8457%  (        2)
00:10:37.819  11949.134 - 12001.773:   96.8677%  (        3)
00:10:37.819  12001.773 - 12054.413:   96.8823%  (        2)
00:10:37.819  12054.413 - 12107.052:   96.9043%  (        3)
00:10:37.819  12107.052 - 12159.692:   96.9190%  (        2)
00:10:37.819  12159.692 - 12212.331:   96.9410%  (        3)
00:10:37.819  12212.331 - 12264.970:   96.9557%  (        2)
00:10:37.819  12264.970 - 12317.610:   96.9997%  (        6)
00:10:37.819  12317.610 - 12370.249:   97.0217%  (        3)
00:10:37.819  12370.249 - 12422.888:   97.0584%  (        5)
00:10:37.819  12422.888 - 12475.528:   97.0877%  (        4)
00:10:37.819  12475.528 - 12528.167:   97.1171%  (        4)
00:10:37.819  12528.167 - 12580.806:   97.1611%  (        6)
00:10:37.819  12580.806 - 12633.446:   97.1978%  (        5)
00:10:37.819  12633.446 - 12686.085:   97.2344%  (        5)
00:10:37.819  12686.085 - 12738.724:   97.2711%  (        5)
00:10:37.819  12738.724 - 12791.364:   97.3151%  (        6)
00:10:37.819  12791.364 - 12844.003:   97.3445%  (        4)
00:10:37.819  12844.003 - 12896.643:   97.3738%  (        4)
00:10:37.819  12896.643 - 12949.282:   97.3885%  (        2)
00:10:37.819  12949.282 - 13001.921:   97.4032%  (        2)
00:10:37.819  13001.921 - 13054.561:   97.4178%  (        2)
00:10:37.819  13054.561 - 13107.200:   97.4398%  (        3)
00:10:37.819  13107.200 - 13159.839:   97.4545%  (        2)
00:10:37.819  13159.839 - 13212.479:   97.4692%  (        2)
00:10:37.819  13212.479 - 13265.118:   97.4839%  (        2)
00:10:37.819  13265.118 - 13317.757:   97.4985%  (        2)
00:10:37.819  13317.757 - 13370.397:   97.5132%  (        2)
00:10:37.819  13370.397 - 13423.036:   97.5352%  (        3)
00:10:37.819  13423.036 - 13475.676:   97.5499%  (        2)
00:10:37.819  13475.676 - 13580.954:   97.5792%  (        4)
00:10:37.819  13580.954 - 13686.233:   97.6086%  (        4)
00:10:37.819  13686.233 - 13791.512:   97.6379%  (        4)
00:10:37.819  13791.512 - 13896.790:   97.6526%  (        2)
00:10:37.819  15265.414 - 15370.692:   97.6673%  (        2)
00:10:37.819  15370.692 - 15475.971:   97.7186%  (        7)
00:10:37.819  15475.971 - 15581.250:   97.7479%  (        4)
00:10:37.819  15581.250 - 15686.529:   97.7846%  (        5)
00:10:37.819  15686.529 - 15791.807:   97.8213%  (        5)
00:10:37.819  15791.807 - 15897.086:   97.8506%  (        4)
00:10:37.819  15897.086 - 16002.365:   97.8873%  (        5)
00:10:37.819  16002.365 - 16107.643:   97.9240%  (        5)
00:10:37.819  16107.643 - 16212.922:   98.0047%  (       11)
00:10:37.819  16212.922 - 16318.201:   98.0854%  (       11)
00:10:37.819  16318.201 - 16423.480:   98.1808%  (       13)
00:10:37.819  16423.480 - 16528.758:   98.2761%  (       13)
00:10:37.819  16528.758 - 16634.037:   98.3568%  (       11)
00:10:37.819  16634.037 - 16739.316:   98.4155%  (        8)
00:10:37.819  16739.316 - 16844.594:   98.4668%  (        7)
00:10:37.819  16844.594 - 16949.873:   98.5182%  (        7)
00:10:37.819  16949.873 - 17055.152:   98.5622%  (        6)
00:10:37.819  17055.152 - 17160.431:   98.5915%  (        4)
00:10:37.819  18318.496 - 18423.775:   98.5989%  (        1)
00:10:37.819  18423.775 - 18529.054:   98.6356%  (        5)
00:10:37.819  18529.054 - 18634.333:   98.6869%  (        7)
00:10:37.819  18634.333 - 18739.611:   98.7236%  (        5)
00:10:37.819  18739.611 - 18844.890:   98.7603%  (        5)
00:10:37.819  18844.890 - 18950.169:   98.8043%  (        6)
00:10:37.819  18950.169 - 19055.447:   98.8483%  (        6)
00:10:37.819  19055.447 - 19160.726:   98.8850%  (        5)
00:10:37.819  19160.726 - 19266.005:   98.9290%  (        6)
00:10:37.819  19266.005 - 19371.284:   98.9657%  (        5)
00:10:37.819  19371.284 - 19476.562:   99.0097%  (        6)
00:10:37.819  19476.562 - 19581.841:   99.0390%  (        4)
00:10:37.819  19581.841 - 19687.120:   99.0610%  (        3)
00:10:37.819  36005.320 - 36215.878:   99.0977%  (        5)
00:10:37.819  36215.878 - 36426.435:   99.1491%  (        7)
00:10:37.819  36426.435 - 36636.993:   99.2151%  (        9)
00:10:37.819  36636.993 - 36847.550:   99.2738%  (        8)
00:10:37.819  36847.550 - 37058.108:   99.3398%  (        9)
00:10:37.819  37058.108 - 37268.665:   99.3985%  (        8)
00:10:37.819  37268.665 - 37479.222:   99.4572%  (        8)
00:10:37.819  37479.222 - 37689.780:   99.5158%  (        8)
00:10:37.819  37689.780 - 37900.337:   99.5305%  (        2)
00:10:37.819  42953.716 - 43164.273:   99.5892%  (        8)
00:10:37.819  43164.273 - 43374.831:   99.6479%  (        8)
00:10:37.819  43374.831 - 43585.388:   99.7066%  (        8)
00:10:37.819  43585.388 - 43795.945:   99.7506%  (        6)
00:10:37.819  43795.945 - 44006.503:   99.8093%  (        8)
00:10:37.819  44006.503 - 44217.060:   99.8680%  (        8)
00:10:37.819  44217.060 - 44427.618:   99.9266%  (        8)
00:10:37.819  44427.618 - 44638.175:   99.9853%  (        8)
00:10:37.819  44638.175 - 44848.733:  100.0000%  (        2)
00:10:37.819  
00:10:37.819  Latency histogram for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:37.819  ==============================================================================
00:10:37.819         Range in us     Cumulative    IO count
00:10:37.819   8001.182 -  8053.822:    0.0657%  (        9)
00:10:37.819   8053.822 -  8106.461:    0.1533%  (       12)
00:10:37.819   8106.461 -  8159.100:    0.3943%  (       33)
00:10:37.819   8159.100 -  8211.740:    0.8178%  (       58)
00:10:37.819   8211.740 -  8264.379:    1.6647%  (      116)
00:10:37.819   8264.379 -  8317.018:    3.0082%  (      184)
00:10:37.819   8317.018 -  8369.658:    4.9723%  (      269)
00:10:37.819   8369.658 -  8422.297:    7.6884%  (      372)
00:10:37.819   8422.297 -  8474.937:   10.9156%  (      442)
00:10:37.819   8474.937 -  8527.576:   14.6831%  (      516)
00:10:37.819   8527.576 -  8580.215:   19.1224%  (      608)
00:10:37.819   8580.215 -  8632.855:   23.6638%  (      622)
00:10:37.819   8632.855 -  8685.494:   28.7237%  (      693)
00:10:37.819   8685.494 -  8738.133:   33.9004%  (      709)
00:10:37.819   8738.133 -  8790.773:   39.2815%  (      737)
00:10:37.819   8790.773 -  8843.412:   44.6773%  (      739)
00:10:37.819   8843.412 -  8896.051:   50.1533%  (      750)
00:10:37.819   8896.051 -  8948.691:   55.7024%  (      760)
00:10:37.819   8948.691 -  9001.330:   61.1784%  (      750)
00:10:37.819   9001.330 -  9053.969:   66.4282%  (      719)
00:10:37.819   9053.969 -  9106.609:   71.1887%  (      652)
00:10:37.819   9106.609 -  9159.248:   75.4892%  (      589)
00:10:37.819   9159.248 -  9211.888:   79.3151%  (      524)
00:10:37.819   9211.888 -  9264.527:   82.4985%  (      436)
00:10:37.819   9264.527 -  9317.166:   85.1636%  (      365)
00:10:37.819   9317.166 -  9369.806:   87.3394%  (      298)
00:10:37.819   9369.806 -  9422.445:   89.0625%  (      236)
00:10:37.819   9422.445 -  9475.084:   90.3402%  (      175)
00:10:37.819   9475.084 -  9527.724:   91.3186%  (      134)
00:10:37.819   9527.724 -  9580.363:   92.0269%  (       97)
00:10:37.819   9580.363 -  9633.002:   92.6183%  (       81)
00:10:37.819   9633.002 -  9685.642:   93.1075%  (       67)
00:10:37.819   9685.642 -  9738.281:   93.4360%  (       45)
00:10:37.819   9738.281 -  9790.920:   93.6770%  (       33)
00:10:37.819   9790.920 -  9843.560:   93.8887%  (       29)
00:10:37.819   9843.560 -  9896.199:   94.0275%  (       19)
00:10:37.819   9896.199 -  9948.839:   94.1662%  (       19)
00:10:37.819   9948.839 - 10001.478:   94.2903%  (       17)
00:10:37.819  10001.478 - 10054.117:   94.3779%  (       12)
00:10:37.819  10054.117 - 10106.757:   94.4509%  (       10)
00:10:37.819  10106.757 - 10159.396:   94.5312%  (       11)
00:10:37.819  10159.396 - 10212.035:   94.6116%  (       11)
00:10:37.819  10212.035 - 10264.675:   94.7065%  (       13)
00:10:37.819  10264.675 - 10317.314:   94.8233%  (       16)
00:10:37.819  10317.314 - 10369.953:   94.9255%  (       14)
00:10:37.819  10369.953 - 10422.593:   95.0058%  (       11)
00:10:37.819  10422.593 - 10475.232:   95.0935%  (       12)
00:10:37.819  10475.232 - 10527.871:   95.1811%  (       12)
00:10:37.819  10527.871 - 10580.511:   95.2687%  (       12)
00:10:37.819  10580.511 - 10633.150:   95.3271%  (        8)
00:10:37.819  10633.150 - 10685.790:   95.3928%  (        9)
00:10:37.819  10685.790 - 10738.429:   95.4658%  (       10)
00:10:37.819  10738.429 - 10791.068:   95.5388%  (       10)
00:10:37.819  10791.068 - 10843.708:   95.6046%  (        9)
00:10:37.819  10843.708 - 10896.347:   95.6703%  (        9)
00:10:37.819  10896.347 - 10948.986:   95.7287%  (        8)
00:10:37.819  10948.986 - 11001.626:   95.7944%  (        9)
00:10:37.819  11001.626 - 11054.265:   95.8601%  (        9)
00:10:37.819  11054.265 - 11106.904:   95.9039%  (        6)
00:10:37.819  11106.904 - 11159.544:   95.9404%  (        5)
00:10:37.819  11159.544 - 11212.183:   95.9769%  (        5)
00:10:37.819  11212.183 - 11264.822:   96.0134%  (        5)
00:10:37.819  11264.822 - 11317.462:   96.0499%  (        5)
00:10:37.819  11317.462 - 11370.101:   96.0864%  (        5)
00:10:37.819  11370.101 - 11422.741:   96.1230%  (        5)
00:10:37.819  11422.741 - 11475.380:   96.1668%  (        6)
00:10:37.819  11475.380 - 11528.019:   96.2398%  (       10)
00:10:37.819  11528.019 - 11580.659:   96.2909%  (        7)
00:10:37.819  11580.659 - 11633.298:   96.3639%  (       10)
00:10:37.819  11633.298 - 11685.937:   96.3931%  (        4)
00:10:37.819  11685.937 - 11738.577:   96.4223%  (        4)
00:10:37.819  11738.577 - 11791.216:   96.4515%  (        4)
00:10:37.819  11791.216 - 11843.855:   96.4807%  (        4)
00:10:37.819  11843.855 - 11896.495:   96.5318%  (        7)
00:10:37.819  11896.495 - 11949.134:   96.5756%  (        6)
00:10:37.819  11949.134 - 12001.773:   96.6414%  (        9)
00:10:37.819  12001.773 - 12054.413:   96.6925%  (        7)
00:10:37.819  12054.413 - 12107.052:   96.7655%  (       10)
00:10:37.819  12107.052 - 12159.692:   96.8312%  (        9)
00:10:37.819  12159.692 - 12212.331:   96.8896%  (        8)
00:10:37.819  12212.331 - 12264.970:   96.9553%  (        9)
00:10:37.819  12264.970 - 12317.610:   97.0064%  (        7)
00:10:37.819  12317.610 - 12370.249:   97.0283%  (        3)
00:10:37.819  12370.249 - 12422.888:   97.0648%  (        5)
00:10:37.819  12422.888 - 12475.528:   97.0940%  (        4)
00:10:37.819  12475.528 - 12528.167:   97.1232%  (        4)
00:10:37.819  12528.167 - 12580.806:   97.1598%  (        5)
00:10:37.819  12580.806 - 12633.446:   97.1963%  (        5)
00:10:37.819  12633.446 - 12686.085:   97.2255%  (        4)
00:10:37.819  12686.085 - 12738.724:   97.2693%  (        6)
00:10:37.819  12738.724 - 12791.364:   97.3058%  (        5)
00:10:37.819  12791.364 - 12844.003:   97.3350%  (        4)
00:10:37.819  12844.003 - 12896.643:   97.3715%  (        5)
00:10:37.819  12896.643 - 12949.282:   97.4080%  (        5)
00:10:37.819  12949.282 - 13001.921:   97.4372%  (        4)
00:10:37.819  13001.921 - 13054.561:   97.4737%  (        5)
00:10:37.819  13054.561 - 13107.200:   97.5029%  (        4)
00:10:37.819  13107.200 - 13159.839:   97.5394%  (        5)
00:10:37.819  13159.839 - 13212.479:   97.5613%  (        3)
00:10:37.819  13212.479 - 13265.118:   97.5759%  (        2)
00:10:37.819  13265.118 - 13317.757:   97.5905%  (        2)
00:10:37.819  13317.757 - 13370.397:   97.6124%  (        3)
00:10:37.819  13370.397 - 13423.036:   97.6270%  (        2)
00:10:37.819  13423.036 - 13475.676:   97.6416%  (        2)
00:10:37.819  13475.676 - 13580.954:   97.6636%  (        3)
00:10:37.819  14844.299 - 14949.578:   97.7001%  (        5)
00:10:37.819  14949.578 - 15054.856:   97.7366%  (        5)
00:10:37.819  15054.856 - 15160.135:   97.7731%  (        5)
00:10:37.819  15160.135 - 15265.414:   97.8096%  (        5)
00:10:37.819  15265.414 - 15370.692:   97.8461%  (        5)
00:10:37.819  15370.692 - 15475.971:   97.8826%  (        5)
00:10:37.819  15475.971 - 15581.250:   97.9191%  (        5)
00:10:37.819  15581.250 - 15686.529:   97.9556%  (        5)
00:10:37.819  15686.529 - 15791.807:   97.9921%  (        5)
00:10:37.819  15791.807 - 15897.086:   98.0213%  (        4)
00:10:37.820  15897.086 - 16002.365:   98.0578%  (        5)
00:10:37.820  16002.365 - 16107.643:   98.0943%  (        5)
00:10:37.820  16107.643 - 16212.922:   98.1308%  (        5)
00:10:37.820  16844.594 - 16949.873:   98.1746%  (        6)
00:10:37.820  16949.873 - 17055.152:   98.2331%  (        8)
00:10:37.820  17055.152 - 17160.431:   98.2842%  (        7)
00:10:37.820  17160.431 - 17265.709:   98.3280%  (        6)
00:10:37.820  17265.709 - 17370.988:   98.3791%  (        7)
00:10:37.820  17370.988 - 17476.267:   98.4229%  (        6)
00:10:37.820  17476.267 - 17581.545:   98.4667%  (        6)
00:10:37.820  17581.545 - 17686.824:   98.5178%  (        7)
00:10:37.820  17686.824 - 17792.103:   98.5689%  (        7)
00:10:37.820  17792.103 - 17897.382:   98.6054%  (        5)
00:10:37.820  17897.382 - 18002.660:   98.6492%  (        6)
00:10:37.820  18002.660 - 18107.939:   98.6857%  (        5)
00:10:37.820  18107.939 - 18213.218:   98.7296%  (        6)
00:10:37.820  18213.218 - 18318.496:   98.7734%  (        6)
00:10:37.820  18318.496 - 18423.775:   98.8099%  (        5)
00:10:37.820  18423.775 - 18529.054:   98.8537%  (        6)
00:10:37.820  18529.054 - 18634.333:   98.8975%  (        6)
00:10:37.820  18634.333 - 18739.611:   98.9340%  (        5)
00:10:37.820  18739.611 - 18844.890:   98.9705%  (        5)
00:10:37.820  18844.890 - 18950.169:   99.0070%  (        5)
00:10:37.820  18950.169 - 19055.447:   99.0508%  (        6)
00:10:37.820  19055.447 - 19160.726:   99.0654%  (        2)
00:10:37.820  29056.925 - 29267.483:   99.0873%  (        3)
00:10:37.820  29267.483 - 29478.040:   99.1457%  (        8)
00:10:37.820  29478.040 - 29688.598:   99.2114%  (        9)
00:10:37.820  29688.598 - 29899.155:   99.2699%  (        8)
00:10:37.820  29899.155 - 30109.712:   99.3283%  (        8)
00:10:37.820  30109.712 - 30320.270:   99.3867%  (        8)
00:10:37.820  30320.270 - 30530.827:   99.4524%  (        9)
00:10:37.820  30530.827 - 30741.385:   99.5108%  (        8)
00:10:37.820  30741.385 - 30951.942:   99.5327%  (        3)
00:10:37.820  35373.648 - 35584.206:   99.5473%  (        2)
00:10:37.820  35584.206 - 35794.763:   99.5984%  (        7)
00:10:37.820  35794.763 - 36005.320:   99.6641%  (        9)
00:10:37.820  36005.320 - 36215.878:   99.7152%  (        7)
00:10:37.820  36215.878 - 36426.435:   99.7737%  (        8)
00:10:37.820  36426.435 - 36636.993:   99.8321%  (        8)
00:10:37.820  36636.993 - 36847.550:   99.8905%  (        8)
00:10:37.820  36847.550 - 37058.108:   99.9489%  (        8)
00:10:37.820  37058.108 - 37268.665:  100.0000%  (        7)
00:10:37.820  
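The run that follows repeats the perf benchmark with a 12 KiB sequential-write workload. For reference, a minimal standalone sketch of the same invocation (option meanings as documented in spdk_nvme_perf's help text; the relative binary path and sudo are assumptions about a typical local SPDK build):

    # -q 128    queue depth per namespace
    # -w write  sequential write workload
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -LL       software latency tracking; given twice it also prints the
    #           per-bucket latency histograms seen in this log
    # -i 0      shared memory group ID
    sudo ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0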
00:10:37.820   16:21:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:10:39.199  Initializing NVMe Controllers
00:10:39.199  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:39.199  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:39.199  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:39.199  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:39.199  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:39.199  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:39.199  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:39.199  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:39.199  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:39.199  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:39.199  Initialization complete. Launching workers.
00:10:39.199  ========================================================
00:10:39.199                                                                             Latency(us)
00:10:39.199  Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:39.199  PCIE (0000:00:10.0) NSID 1 from core  0:   12972.40     152.02    9892.68    7750.91   42947.16
00:10:39.199  PCIE (0000:00:11.0) NSID 1 from core  0:   12972.40     152.02    9878.07    7857.36   41015.09
00:10:39.199  PCIE (0000:00:13.0) NSID 1 from core  0:   12972.40     152.02    9864.25    7624.93   40395.44
00:10:39.199  PCIE (0000:00:12.0) NSID 1 from core  0:   12972.40     152.02    9850.52    7777.18   38597.12
00:10:39.199  PCIE (0000:00:12.0) NSID 2 from core  0:   12972.40     152.02    9836.86    7830.49   36788.18
00:10:39.199  PCIE (0000:00:12.0) NSID 3 from core  0:   13036.30     152.77    9774.48    7741.76   29020.47
00:10:39.199  ========================================================
00:10:39.199  Total                                  :   77898.30     912.87    9849.41    7624.93   42947.16
00:10:39.199  
00:10:39.199  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:39.199  =================================================================================
00:10:39.199    1.00000% :  8106.461us
00:10:39.199   10.00000% :  8580.215us
00:10:39.199   25.00000% :  8843.412us
00:10:39.199   50.00000% :  9211.888us
00:10:39.199   75.00000% :  9633.002us
00:10:39.199   90.00000% : 11317.462us
00:10:39.199   95.00000% : 13791.512us
00:10:39.199   98.00000% : 18213.218us
00:10:39.199   99.00000% : 19897.677us
00:10:39.199   99.50000% : 34320.861us
00:10:39.199   99.90000% : 42743.158us
00:10:39.199   99.99000% : 42953.716us
00:10:39.199   99.99900% : 42953.716us
00:10:39.199   99.99990% : 42953.716us
00:10:39.199   99.99999% : 42953.716us
00:10:39.199  
00:10:39.199  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:39.199  =================================================================================
00:10:39.199    1.00000% :  8106.461us
00:10:39.199   10.00000% :  8632.855us
00:10:39.199   25.00000% :  8843.412us
00:10:39.199   50.00000% :  9211.888us
00:10:39.199   75.00000% :  9633.002us
00:10:39.199   90.00000% : 11212.183us
00:10:39.199   95.00000% : 14212.627us
00:10:39.199   98.00000% : 17686.824us
00:10:39.199   99.00000% : 20002.956us
00:10:39.199   99.50000% : 32636.402us
00:10:39.199   99.90000% : 40848.141us
00:10:39.199   99.99000% : 41058.699us
00:10:39.199   99.99900% : 41058.699us
00:10:39.199   99.99990% : 41058.699us
00:10:39.199   99.99999% : 41058.699us
00:10:39.200  
00:10:39.200  Summary latency data for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:39.200  =================================================================================
00:10:39.200    1.00000% :  8001.182us
00:10:39.200   10.00000% :  8580.215us
00:10:39.200   25.00000% :  8843.412us
00:10:39.200   50.00000% :  9211.888us
00:10:39.200   75.00000% :  9685.642us
00:10:39.200   90.00000% : 10791.068us
00:10:39.200   95.00000% : 13686.233us
00:10:39.200   98.00000% : 18423.775us
00:10:39.200   99.00000% : 20108.235us
00:10:39.200   99.50000% : 32425.844us
00:10:39.200   99.90000% : 40216.469us
00:10:39.200   99.99000% : 40427.027us
00:10:39.200   99.99900% : 40427.027us
00:10:39.200   99.99990% : 40427.027us
00:10:39.200   99.99999% : 40427.027us
00:10:39.200  
00:10:39.200  Summary latency data for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:39.200  =================================================================================
00:10:39.200    1.00000% :  8053.822us
00:10:39.200   10.00000% :  8632.855us
00:10:39.200   25.00000% :  8896.051us
00:10:39.200   50.00000% :  9211.888us
00:10:39.200   75.00000% :  9685.642us
00:10:39.200   90.00000% : 10896.347us
00:10:39.200   95.00000% : 13896.790us
00:10:39.200   98.00000% : 18423.775us
00:10:39.200   99.00000% : 19792.398us
00:10:39.200   99.50000% : 30741.385us
00:10:39.200   99.90000% : 38321.452us
00:10:39.200   99.99000% : 38742.567us
00:10:39.200   99.99900% : 38742.567us
00:10:39.200   99.99990% : 38742.567us
00:10:39.200   99.99999% : 38742.567us
00:10:39.200  
00:10:39.200  Summary latency data for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:39.200  =================================================================================
00:10:39.200    1.00000% :  8159.100us
00:10:39.200   10.00000% :  8632.855us
00:10:39.200   25.00000% :  8896.051us
00:10:39.200   50.00000% :  9211.888us
00:10:39.200   75.00000% :  9685.642us
00:10:39.200   90.00000% : 11001.626us
00:10:39.200   95.00000% : 14107.348us
00:10:39.200   98.00000% : 18107.939us
00:10:39.200   99.00000% : 19476.562us
00:10:39.200   99.50000% : 29478.040us
00:10:39.200   99.90000% : 36636.993us
00:10:39.200   99.99000% : 36847.550us
00:10:39.200   99.99900% : 36847.550us
00:10:39.200   99.99990% : 36847.550us
00:10:39.200   99.99999% : 36847.550us
00:10:39.200  
00:10:39.200  Summary latency data for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:39.200  =================================================================================
00:10:39.200    1.00000% :  8106.461us
00:10:39.200   10.00000% :  8632.855us
00:10:39.200   25.00000% :  8843.412us
00:10:39.200   50.00000% :  9211.888us
00:10:39.200   75.00000% :  9633.002us
00:10:39.200   90.00000% : 11422.741us
00:10:39.200   95.00000% : 13896.790us
00:10:39.200   98.00000% : 18213.218us
00:10:39.200   99.00000% : 19476.562us
00:10:39.200   99.50000% : 21055.743us
00:10:39.200   99.90000% : 28846.368us
00:10:39.200   99.99000% : 29056.925us
00:10:39.200   99.99900% : 29056.925us
00:10:39.200   99.99990% : 29056.925us
00:10:39.200   99.99999% : 29056.925us
00:10:39.200  
00:10:39.200  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:39.200  ==============================================================================
00:10:39.200         Range in us     Cumulative    IO count
00:10:39.200   7737.986 -  7790.625:    0.0154%  (        2)
00:10:39.200   7790.625 -  7843.264:    0.0231%  (        1)
00:10:39.200   7843.264 -  7895.904:    0.0308%  (        1)
00:10:39.200   7895.904 -  7948.543:    0.1001%  (        9)
00:10:39.200   7948.543 -  8001.182:    0.2540%  (       20)
00:10:39.200   8001.182 -  8053.822:    0.6927%  (       57)
00:10:39.200   8053.822 -  8106.461:    1.6087%  (      119)
00:10:39.200   8106.461 -  8159.100:    2.6093%  (      130)
00:10:39.200   8159.100 -  8211.740:    3.4175%  (      105)
00:10:39.200   8211.740 -  8264.379:    3.9948%  (       75)
00:10:39.200   8264.379 -  8317.018:    4.5567%  (       73)
00:10:39.200   8317.018 -  8369.658:    5.2417%  (       89)
00:10:39.200   8369.658 -  8422.297:    6.1192%  (      114)
00:10:39.200   8422.297 -  8474.937:    7.1659%  (      136)
00:10:39.200   8474.937 -  8527.576:    8.6207%  (      189)
00:10:39.200   8527.576 -  8580.215:   10.8836%  (      294)
00:10:39.200   8580.215 -  8632.855:   13.5006%  (      340)
00:10:39.200   8632.855 -  8685.494:   16.3870%  (      375)
00:10:39.200   8685.494 -  8738.133:   19.7352%  (      435)
00:10:39.200   8738.133 -  8790.773:   23.0296%  (      428)
00:10:39.200   8790.773 -  8843.412:   26.9166%  (      505)
00:10:39.200   8843.412 -  8896.051:   30.6188%  (      481)
00:10:39.200   8896.051 -  8948.691:   34.9138%  (      558)
00:10:39.200   8948.691 -  9001.330:   38.3621%  (      448)
00:10:39.200   9001.330 -  9053.969:   41.9412%  (      465)
00:10:39.200   9053.969 -  9106.609:   45.4741%  (      459)
00:10:39.200   9106.609 -  9159.248:   48.5068%  (      394)
00:10:39.200   9159.248 -  9211.888:   51.2931%  (      362)
00:10:39.200   9211.888 -  9264.527:   54.5567%  (      424)
00:10:39.200   9264.527 -  9317.166:   58.1589%  (      468)
00:10:39.200   9317.166 -  9369.806:   62.1613%  (      520)
00:10:39.200   9369.806 -  9422.445:   65.6404%  (      452)
00:10:39.200   9422.445 -  9475.084:   68.5730%  (      381)
00:10:39.200   9475.084 -  9527.724:   71.4748%  (      377)
00:10:39.200   9527.724 -  9580.363:   73.5760%  (      273)
00:10:39.200   9580.363 -  9633.002:   75.0308%  (      189)
00:10:39.200   9633.002 -  9685.642:   76.5086%  (      192)
00:10:39.200   9685.642 -  9738.281:   77.7863%  (      166)
00:10:39.200   9738.281 -  9790.920:   78.7946%  (      131)
00:10:39.200   9790.920 -  9843.560:   80.0416%  (      162)
00:10:39.200   9843.560 -  9896.199:   81.3732%  (      173)
00:10:39.200   9896.199 -  9948.839:   82.3507%  (      127)
00:10:39.200   9948.839 - 10001.478:   83.2281%  (      114)
00:10:39.200  10001.478 - 10054.117:   84.0055%  (      101)
00:10:39.200  10054.117 - 10106.757:   84.7675%  (       99)
00:10:39.200  10106.757 - 10159.396:   85.2448%  (       62)
00:10:39.200  10159.396 - 10212.035:   85.8836%  (       83)
00:10:39.200  10212.035 - 10264.675:   86.3070%  (       55)
00:10:39.200  10264.675 - 10317.314:   86.8381%  (       69)
00:10:39.200  10317.314 - 10369.953:   87.1536%  (       41)
00:10:39.200  10369.953 - 10422.593:   87.7232%  (       74)
00:10:39.200  10422.593 - 10475.232:   88.2620%  (       70)
00:10:39.200  10475.232 - 10527.871:   88.4775%  (       28)
00:10:39.200  10527.871 - 10580.511:   88.6392%  (       21)
00:10:39.200  10580.511 - 10633.150:   88.8316%  (       25)
00:10:39.200  10633.150 - 10685.790:   88.9240%  (       12)
00:10:39.200  10685.790 - 10738.429:   89.0317%  (       14)
00:10:39.200  10738.429 - 10791.068:   89.1164%  (       11)
00:10:39.200  10791.068 - 10843.708:   89.1780%  (        8)
00:10:39.200  10843.708 - 10896.347:   89.2395%  (        8)
00:10:39.200  10896.347 - 10948.986:   89.3011%  (        8)
00:10:39.200  10948.986 - 11001.626:   89.4012%  (       13)
00:10:39.200  11001.626 - 11054.265:   89.5320%  (       17)
00:10:39.200  11054.265 - 11106.904:   89.5936%  (        8)
00:10:39.200  11106.904 - 11159.544:   89.6629%  (        9)
00:10:39.200  11159.544 - 11212.183:   89.7629%  (       13)
00:10:39.200  11212.183 - 11264.822:   89.9784%  (       28)
00:10:39.200  11264.822 - 11317.462:   90.2401%  (       34)
00:10:39.200  11317.462 - 11370.101:   90.5095%  (       35)
00:10:39.200  11370.101 - 11422.741:   90.6635%  (       20)
00:10:39.200  11422.741 - 11475.380:   90.7866%  (       16)
00:10:39.200  11475.380 - 11528.019:   91.1022%  (       41)
00:10:39.200  11528.019 - 11580.659:   91.3408%  (       31)
00:10:39.200  11580.659 - 11633.298:   91.6179%  (       36)
00:10:39.200  11633.298 - 11685.937:   91.7796%  (       21)
00:10:39.200  11685.937 - 11738.577:   91.8950%  (       15)
00:10:39.200  11738.577 - 11791.216:   92.0567%  (       21)
00:10:39.200  11791.216 - 11843.855:   92.2337%  (       23)
00:10:39.200  11843.855 - 11896.495:   92.3491%  (       15)
00:10:39.200  11896.495 - 11949.134:   92.4723%  (       16)
00:10:39.200  11949.134 - 12001.773:   92.5185%  (        6)
00:10:39.200  12001.773 - 12054.413:   92.5724%  (        7)
00:10:39.200  12054.413 - 12107.052:   92.6262%  (        7)
00:10:39.200  12107.052 - 12159.692:   92.7032%  (       10)
00:10:39.200  12159.692 - 12212.331:   92.7340%  (        4)
00:10:39.200  12212.331 - 12264.970:   92.7879%  (        7)
00:10:39.200  12264.970 - 12317.610:   92.8264%  (        5)
00:10:39.200  12317.610 - 12370.249:   92.8956%  (        9)
00:10:39.200  12370.249 - 12422.888:   92.9572%  (        8)
00:10:39.200  12422.888 - 12475.528:   92.9803%  (        3)
00:10:39.200  12475.528 - 12528.167:   93.0111%  (        4)
00:10:39.200  12528.167 - 12580.806:   93.0419%  (        4)
00:10:39.200  12580.806 - 12633.446:   93.0727%  (        4)
00:10:39.200  12633.446 - 12686.085:   93.1111%  (        5)
00:10:39.200  12844.003 - 12896.643:   93.1496%  (        5)
00:10:39.200  12896.643 - 12949.282:   93.3344%  (       24)
00:10:39.200  12949.282 - 13001.921:   93.3651%  (        4)
00:10:39.200  13001.921 - 13054.561:   93.3959%  (        4)
00:10:39.200  13054.561 - 13107.200:   93.4267%  (        4)
00:10:39.200  13107.200 - 13159.839:   93.4498%  (        3)
00:10:39.200  13159.839 - 13212.479:   93.5191%  (        9)
00:10:39.200  13212.479 - 13265.118:   93.7346%  (       28)
00:10:39.200  13265.118 - 13317.757:   93.9270%  (       25)
00:10:39.200  13317.757 - 13370.397:   94.0656%  (       18)
00:10:39.200  13370.397 - 13423.036:   94.2734%  (       27)
00:10:39.200  13423.036 - 13475.676:   94.4735%  (       26)
00:10:39.200  13475.676 - 13580.954:   94.6659%  (       25)
00:10:39.200  13580.954 - 13686.233:   94.8430%  (       23)
00:10:39.200  13686.233 - 13791.512:   95.0585%  (       28)
00:10:39.200  13791.512 - 13896.790:   95.1893%  (       17)
00:10:39.200  13896.790 - 14002.069:   95.3125%  (       16)
00:10:39.200  14002.069 - 14107.348:   95.4510%  (       18)
00:10:39.200  14107.348 - 14212.627:   95.6435%  (       25)
00:10:39.200  14212.627 - 14317.905:   95.7435%  (       13)
00:10:39.200  14317.905 - 14423.184:   95.8359%  (       12)
00:10:39.200  14423.184 - 14528.463:   95.8975%  (        8)
00:10:39.200  14528.463 - 14633.741:   95.9898%  (       12)
00:10:39.200  14633.741 - 14739.020:   96.0668%  (       10)
00:10:39.200  14739.020 - 14844.299:   96.1438%  (       10)
00:10:39.200  14844.299 - 14949.578:   96.2438%  (       13)
00:10:39.200  14949.578 - 15054.856:   96.3439%  (       13)
00:10:39.200  15054.856 - 15160.135:   96.5440%  (       26)
00:10:39.200  15160.135 - 15265.414:   96.5517%  (        1)
00:10:39.200  16528.758 - 16634.037:   96.5594%  (        1)
00:10:39.200  16634.037 - 16739.316:   96.6595%  (       13)
00:10:39.200  16739.316 - 16844.594:   96.8134%  (       20)
00:10:39.200  16844.594 - 16949.873:   97.0751%  (       34)
00:10:39.201  16949.873 - 17055.152:   97.1752%  (       13)
00:10:39.201  17055.152 - 17160.431:   97.2752%  (       13)
00:10:39.201  17160.431 - 17265.709:   97.3599%  (       11)
00:10:39.201  17265.709 - 17370.988:   97.4523%  (       12)
00:10:39.201  17370.988 - 17476.267:   97.5369%  (       11)
00:10:39.201  17476.267 - 17581.545:   97.6370%  (       13)
00:10:39.201  17581.545 - 17686.824:   97.7063%  (        9)
00:10:39.201  17686.824 - 17792.103:   97.7986%  (       12)
00:10:39.201  17792.103 - 17897.382:   97.8679%  (        9)
00:10:39.201  17897.382 - 18002.660:   97.9218%  (        7)
00:10:39.201  18002.660 - 18107.939:   97.9680%  (        6)
00:10:39.201  18107.939 - 18213.218:   98.0142%  (        6)
00:10:39.201  18213.218 - 18318.496:   98.1296%  (       15)
00:10:39.201  18318.496 - 18423.775:   98.2143%  (       11)
00:10:39.201  18423.775 - 18529.054:   98.2605%  (        6)
00:10:39.201  18529.054 - 18634.333:   98.3759%  (       15)
00:10:39.201  18634.333 - 18739.611:   98.4683%  (       12)
00:10:39.201  18739.611 - 18844.890:   98.6684%  (       26)
00:10:39.201  18844.890 - 18950.169:   98.7762%  (       14)
00:10:39.201  18950.169 - 19055.447:   98.8147%  (        5)
00:10:39.201  19055.447 - 19160.726:   98.8685%  (        7)
00:10:39.201  19160.726 - 19266.005:   98.8839%  (        2)
00:10:39.201  19266.005 - 19371.284:   98.9070%  (        3)
00:10:39.201  19371.284 - 19476.562:   98.9301%  (        3)
00:10:39.201  19476.562 - 19581.841:   98.9455%  (        2)
00:10:39.201  19581.841 - 19687.120:   98.9686%  (        3)
00:10:39.201  19687.120 - 19792.398:   98.9917%  (        3)
00:10:39.201  19792.398 - 19897.677:   99.0148%  (        3)
00:10:39.201  32215.287 - 32425.844:   99.0379%  (        3)
00:10:39.201  32425.844 - 32636.402:   99.0994%  (        8)
00:10:39.201  32636.402 - 32846.959:   99.1456%  (        6)
00:10:39.201  32846.959 - 33057.516:   99.2072%  (        8)
00:10:39.201  33057.516 - 33268.074:   99.2611%  (        7)
00:10:39.201  33268.074 - 33478.631:   99.3227%  (        8)
00:10:39.201  33478.631 - 33689.189:   99.3842%  (        8)
00:10:39.201  33689.189 - 33899.746:   99.4458%  (        8)
00:10:39.201  33899.746 - 34110.304:   99.4997%  (        7)
00:10:39.201  34110.304 - 34320.861:   99.5074%  (        1)
00:10:39.201  41058.699 - 41269.256:   99.5382%  (        4)
00:10:39.201  41269.256 - 41479.814:   99.5998%  (        8)
00:10:39.201  41479.814 - 41690.371:   99.6613%  (        8)
00:10:39.201  41690.371 - 41900.929:   99.7152%  (        7)
00:10:39.201  41900.929 - 42111.486:   99.7691%  (        7)
00:10:39.201  42111.486 - 42322.043:   99.8307%  (        8)
00:10:39.201  42322.043 - 42532.601:   99.8845%  (        7)
00:10:39.201  42532.601 - 42743.158:   99.9538%  (        9)
00:10:39.201  42743.158 - 42953.716:  100.0000%  (        6)
00:10:39.201  
00:10:39.201  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:39.201  ==============================================================================
00:10:39.201         Range in us     Cumulative    IO count
00:10:39.201   7843.264 -  7895.904:    0.0539%  (        7)
00:10:39.201   7895.904 -  7948.543:    0.1308%  (       10)
00:10:39.201   7948.543 -  8001.182:    0.2617%  (       17)
00:10:39.201   8001.182 -  8053.822:    0.9852%  (       94)
00:10:39.201   8053.822 -  8106.461:    1.2700%  (       37)
00:10:39.201   8106.461 -  8159.100:    1.9397%  (       87)
00:10:39.201   8159.100 -  8211.740:    2.3630%  (       55)
00:10:39.201   8211.740 -  8264.379:    3.2174%  (      111)
00:10:39.201   8264.379 -  8317.018:    3.7485%  (       69)
00:10:39.201   8317.018 -  8369.658:    4.4951%  (       97)
00:10:39.201   8369.658 -  8422.297:    5.4341%  (      122)
00:10:39.201   8422.297 -  8474.937:    6.2500%  (      106)
00:10:39.201   8474.937 -  8527.576:    7.3276%  (      140)
00:10:39.201   8527.576 -  8580.215:    9.1210%  (      233)
00:10:39.201   8580.215 -  8632.855:   11.3916%  (      295)
00:10:39.201   8632.855 -  8685.494:   14.5705%  (      413)
00:10:39.201   8685.494 -  8738.133:   17.8571%  (      427)
00:10:39.201   8738.133 -  8790.773:   21.6749%  (      496)
00:10:39.201   8790.773 -  8843.412:   25.6619%  (      518)
00:10:39.201   8843.412 -  8896.051:   29.9415%  (      556)
00:10:39.201   8896.051 -  8948.691:   33.6438%  (      481)
00:10:39.201   8948.691 -  9001.330:   37.4538%  (      495)
00:10:39.201   9001.330 -  9053.969:   41.2177%  (      489)
00:10:39.201   9053.969 -  9106.609:   45.5126%  (      558)
00:10:39.201   9106.609 -  9159.248:   49.6075%  (      532)
00:10:39.201   9159.248 -  9211.888:   53.9486%  (      564)
00:10:39.201   9211.888 -  9264.527:   58.1204%  (      542)
00:10:39.201   9264.527 -  9317.166:   61.5994%  (      452)
00:10:39.201   9317.166 -  9369.806:   64.7321%  (      407)
00:10:39.201   9369.806 -  9422.445:   67.4415%  (      352)
00:10:39.201   9422.445 -  9475.084:   69.6736%  (      290)
00:10:39.201   9475.084 -  9527.724:   71.6518%  (      257)
00:10:39.201   9527.724 -  9580.363:   73.9763%  (      302)
00:10:39.201   9580.363 -  9633.002:   75.8544%  (      244)
00:10:39.201   9633.002 -  9685.642:   77.4400%  (      206)
00:10:39.201   9685.642 -  9738.281:   78.6022%  (      151)
00:10:39.201   9738.281 -  9790.920:   80.0108%  (      183)
00:10:39.201   9790.920 -  9843.560:   80.8036%  (      103)
00:10:39.201   9843.560 -  9896.199:   81.5194%  (       93)
00:10:39.201   9896.199 -  9948.839:   82.2583%  (       96)
00:10:39.201   9948.839 - 10001.478:   83.1820%  (      120)
00:10:39.201  10001.478 - 10054.117:   84.0825%  (      117)
00:10:39.201  10054.117 - 10106.757:   84.8291%  (       97)
00:10:39.201  10106.757 - 10159.396:   85.5988%  (      100)
00:10:39.201  10159.396 - 10212.035:   86.2608%  (       86)
00:10:39.201  10212.035 - 10264.675:   86.9612%  (       91)
00:10:39.201  10264.675 - 10317.314:   87.6308%  (       87)
00:10:39.201  10317.314 - 10369.953:   87.9695%  (       44)
00:10:39.201  10369.953 - 10422.593:   88.2928%  (       42)
00:10:39.201  10422.593 - 10475.232:   88.5314%  (       31)
00:10:39.201  10475.232 - 10527.871:   88.7469%  (       28)
00:10:39.201  10527.871 - 10580.511:   89.0394%  (       38)
00:10:39.201  10580.511 - 10633.150:   89.1933%  (       20)
00:10:39.201  10633.150 - 10685.790:   89.3165%  (       16)
00:10:39.201  10685.790 - 10738.429:   89.4012%  (       11)
00:10:39.201  10738.429 - 10791.068:   89.5089%  (       14)
00:10:39.201  10791.068 - 10843.708:   89.5397%  (        4)
00:10:39.201  10843.708 - 10896.347:   89.5859%  (        6)
00:10:39.201  10896.347 - 10948.986:   89.6090%  (        3)
00:10:39.201  10948.986 - 11001.626:   89.6783%  (        9)
00:10:39.201  11001.626 - 11054.265:   89.7629%  (       11)
00:10:39.201  11054.265 - 11106.904:   89.8168%  (        7)
00:10:39.201  11106.904 - 11159.544:   89.8784%  (        8)
00:10:39.201  11159.544 - 11212.183:   90.0092%  (       17)
00:10:39.201  11212.183 - 11264.822:   90.1093%  (       13)
00:10:39.201  11264.822 - 11317.462:   90.2171%  (       14)
00:10:39.201  11317.462 - 11370.101:   90.3479%  (       17)
00:10:39.201  11370.101 - 11422.741:   90.4942%  (       19)
00:10:39.201  11422.741 - 11475.380:   90.6019%  (       14)
00:10:39.201  11475.380 - 11528.019:   90.7174%  (       15)
00:10:39.201  11528.019 - 11580.659:   90.8405%  (       16)
00:10:39.201  11580.659 - 11633.298:   91.0175%  (       23)
00:10:39.201  11633.298 - 11685.937:   91.1484%  (       17)
00:10:39.201  11685.937 - 11738.577:   91.2408%  (       12)
00:10:39.201  11738.577 - 11791.216:   91.4101%  (       22)
00:10:39.201  11791.216 - 11843.855:   91.5102%  (       13)
00:10:39.201  11843.855 - 11896.495:   91.5948%  (       11)
00:10:39.201  11896.495 - 11949.134:   91.6872%  (       12)
00:10:39.201  11949.134 - 12001.773:   91.7719%  (       11)
00:10:39.201  12001.773 - 12054.413:   91.8565%  (       11)
00:10:39.201  12054.413 - 12107.052:   91.9643%  (       14)
00:10:39.201  12107.052 - 12159.692:   92.0797%  (       15)
00:10:39.201  12159.692 - 12212.331:   92.2106%  (       17)
00:10:39.201  12212.331 - 12264.970:   92.4030%  (       25)
00:10:39.201  12264.970 - 12317.610:   92.5262%  (       16)
00:10:39.201  12317.610 - 12370.249:   92.5954%  (        9)
00:10:39.201  12370.249 - 12422.888:   92.6724%  (       10)
00:10:39.201  12422.888 - 12475.528:   92.7571%  (       11)
00:10:39.201  12475.528 - 12528.167:   92.8571%  (       13)
00:10:39.201  12528.167 - 12580.806:   92.9418%  (       11)
00:10:39.201  12580.806 - 12633.446:   92.9880%  (        6)
00:10:39.201  12633.446 - 12686.085:   93.0265%  (        5)
00:10:39.201  12686.085 - 12738.724:   93.0727%  (        6)
00:10:39.201  12738.724 - 12791.364:   93.2112%  (       18)
00:10:39.201  12791.364 - 12844.003:   93.3575%  (       19)
00:10:39.201  12844.003 - 12896.643:   93.4960%  (       18)
00:10:39.201  12896.643 - 12949.282:   93.5961%  (       13)
00:10:39.201  12949.282 - 13001.921:   93.6884%  (       12)
00:10:39.201  13001.921 - 13054.561:   93.8270%  (       18)
00:10:39.201  13054.561 - 13107.200:   93.9039%  (       10)
00:10:39.201  13107.200 - 13159.839:   93.9501%  (        6)
00:10:39.201  13159.839 - 13212.479:   93.9732%  (        3)
00:10:39.201  13212.479 - 13265.118:   94.0271%  (        7)
00:10:39.201  13265.118 - 13317.757:   94.0656%  (        5)
00:10:39.201  13317.757 - 13370.397:   94.1502%  (       11)
00:10:39.201  13370.397 - 13423.036:   94.2195%  (        9)
00:10:39.201  13423.036 - 13475.676:   94.3042%  (       11)
00:10:39.201  13475.676 - 13580.954:   94.5043%  (       26)
00:10:39.201  13580.954 - 13686.233:   94.6275%  (       16)
00:10:39.201  13686.233 - 13791.512:   94.7275%  (       13)
00:10:39.201  13791.512 - 13896.790:   94.8276%  (       13)
00:10:39.201  13896.790 - 14002.069:   94.8815%  (        7)
00:10:39.201  14002.069 - 14107.348:   94.9507%  (        9)
00:10:39.201  14107.348 - 14212.627:   95.0508%  (       13)
00:10:39.201  14212.627 - 14317.905:   95.2201%  (       22)
00:10:39.201  14317.905 - 14423.184:   95.5280%  (       40)
00:10:39.201  14423.184 - 14528.463:   95.6512%  (       16)
00:10:39.201  14528.463 - 14633.741:   95.7204%  (        9)
00:10:39.201  14633.741 - 14739.020:   95.8205%  (       13)
00:10:39.201  14739.020 - 14844.299:   95.9206%  (       13)
00:10:39.201  14844.299 - 14949.578:   96.0360%  (       15)
00:10:39.201  14949.578 - 15054.856:   96.1284%  (       12)
00:10:39.201  15054.856 - 15160.135:   96.2054%  (       10)
00:10:39.201  15160.135 - 15265.414:   96.2900%  (       11)
00:10:39.201  15265.414 - 15370.692:   96.3439%  (        7)
00:10:39.201  15370.692 - 15475.971:   96.3978%  (        7)
00:10:39.201  15475.971 - 15581.250:   96.4517%  (        7)
00:10:39.201  15581.250 - 15686.529:   96.5055%  (        7)
00:10:39.201  15686.529 - 15791.807:   96.5517%  (        6)
00:10:39.201  16212.922 - 16318.201:   96.6518%  (       13)
00:10:39.201  16318.201 - 16423.480:   96.8134%  (       21)
00:10:39.201  16423.480 - 16528.758:   96.9289%  (       15)
00:10:39.201  16528.758 - 16634.037:   96.9905%  (        8)
00:10:39.201  16634.037 - 16739.316:   97.0520%  (        8)
00:10:39.201  16739.316 - 16844.594:   97.1213%  (        9)
00:10:39.201  16844.594 - 16949.873:   97.2137%  (       12)
00:10:39.201  16949.873 - 17055.152:   97.4061%  (       25)
00:10:39.201  17055.152 - 17160.431:   97.5831%  (       23)
00:10:39.201  17160.431 - 17265.709:   97.7602%  (       23)
00:10:39.201  17265.709 - 17370.988:   97.8525%  (       12)
00:10:39.201  17370.988 - 17476.267:   97.9064%  (        7)
00:10:39.201  17476.267 - 17581.545:   97.9680%  (        8)
00:10:39.201  17581.545 - 17686.824:   98.0219%  (        7)
00:10:39.202  17686.824 - 17792.103:   98.0296%  (        1)
00:10:39.202  18107.939 - 18213.218:   98.0526%  (        3)
00:10:39.202  18213.218 - 18318.496:   98.1835%  (       17)
00:10:39.202  18318.496 - 18423.775:   98.2066%  (        3)
00:10:39.202  18423.775 - 18529.054:   98.2220%  (        2)
00:10:39.202  18529.054 - 18634.333:   98.2605%  (        5)
00:10:39.202  18634.333 - 18739.611:   98.4375%  (       23)
00:10:39.202  18739.611 - 18844.890:   98.4837%  (        6)
00:10:39.202  18844.890 - 18950.169:   98.5376%  (        7)
00:10:39.202  18950.169 - 19055.447:   98.6222%  (       11)
00:10:39.202  19055.447 - 19160.726:   98.6992%  (       10)
00:10:39.202  19160.726 - 19266.005:   98.7839%  (       11)
00:10:39.202  19266.005 - 19371.284:   98.8608%  (       10)
00:10:39.202  19371.284 - 19476.562:   98.8993%  (        5)
00:10:39.202  19476.562 - 19581.841:   98.9224%  (        3)
00:10:39.202  19581.841 - 19687.120:   98.9532%  (        4)
00:10:39.202  19687.120 - 19792.398:   98.9763%  (        3)
00:10:39.202  19792.398 - 19897.677:   98.9994%  (        3)
00:10:39.202  19897.677 - 20002.956:   99.0148%  (        2)
00:10:39.202  30951.942 - 31162.500:   99.0687%  (        7)
00:10:39.202  31162.500 - 31373.057:   99.1302%  (        8)
00:10:39.202  31373.057 - 31583.614:   99.1918%  (        8)
00:10:39.202  31583.614 - 31794.172:   99.2611%  (        9)
00:10:39.202  31794.172 - 32004.729:   99.3227%  (        8)
00:10:39.202  32004.729 - 32215.287:   99.3842%  (        8)
00:10:39.202  32215.287 - 32425.844:   99.4458%  (        8)
00:10:39.202  32425.844 - 32636.402:   99.5074%  (        8)
00:10:39.202  39374.239 - 39584.797:   99.5690%  (        8)
00:10:39.202  39584.797 - 39795.354:   99.6305%  (        8)
00:10:39.202  39795.354 - 40005.912:   99.6921%  (        8)
00:10:39.202  40005.912 - 40216.469:   99.7614%  (        9)
00:10:39.202  40216.469 - 40427.027:   99.8230%  (        8)
00:10:39.202  40427.027 - 40637.584:   99.8768%  (        7)
00:10:39.202  40637.584 - 40848.141:   99.9461%  (        9)
00:10:39.202  40848.141 - 41058.699:  100.0000%  (        7)
00:10:39.202  
00:10:39.202  Latency histogram for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:39.202  ==============================================================================
00:10:39.202         Range in us     Cumulative    IO count
00:10:39.202   7580.067 -  7632.707:    0.0077%  (        1)
00:10:39.202   7632.707 -  7685.346:    0.0154%  (        1)
00:10:39.202   7685.346 -  7737.986:    0.0693%  (        7)
00:10:39.202   7737.986 -  7790.625:    0.1462%  (       10)
00:10:39.202   7790.625 -  7843.264:    0.2232%  (       10)
00:10:39.202   7843.264 -  7895.904:    0.4079%  (       24)
00:10:39.202   7895.904 -  7948.543:    0.6850%  (       36)
00:10:39.202   7948.543 -  8001.182:    1.1469%  (       60)
00:10:39.202   8001.182 -  8053.822:    1.5471%  (       52)
00:10:39.202   8053.822 -  8106.461:    2.0320%  (       63)
00:10:39.202   8106.461 -  8159.100:    2.3399%  (       40)
00:10:39.202   8159.100 -  8211.740:    2.8017%  (       60)
00:10:39.202   8211.740 -  8264.379:    3.4406%  (       83)
00:10:39.202   8264.379 -  8317.018:    4.3565%  (      119)
00:10:39.202   8317.018 -  8369.658:    5.2879%  (      121)
00:10:39.202   8369.658 -  8422.297:    6.5887%  (      169)
00:10:39.202   8422.297 -  8474.937:    7.9126%  (      172)
00:10:39.202   8474.937 -  8527.576:    9.5982%  (      219)
00:10:39.202   8527.576 -  8580.215:   11.2454%  (      214)
00:10:39.202   8580.215 -  8632.855:   13.3236%  (      270)
00:10:39.202   8632.855 -  8685.494:   16.2331%  (      378)
00:10:39.202   8685.494 -  8738.133:   19.5351%  (      429)
00:10:39.202   8738.133 -  8790.773:   23.6145%  (      530)
00:10:39.202   8790.773 -  8843.412:   27.2629%  (      474)
00:10:39.202   8843.412 -  8896.051:   30.9729%  (      482)
00:10:39.202   8896.051 -  8948.691:   35.2525%  (      556)
00:10:39.202   8948.691 -  9001.330:   39.1472%  (      506)
00:10:39.202   9001.330 -  9053.969:   42.3183%  (      412)
00:10:39.202   9053.969 -  9106.609:   45.5126%  (      415)
00:10:39.202   9106.609 -  9159.248:   48.3374%  (      367)
00:10:39.202   9159.248 -  9211.888:   51.5317%  (      415)
00:10:39.202   9211.888 -  9264.527:   54.2796%  (      357)
00:10:39.202   9264.527 -  9317.166:   57.7971%  (      457)
00:10:39.202   9317.166 -  9369.806:   61.0376%  (      421)
00:10:39.202   9369.806 -  9422.445:   64.1318%  (      402)
00:10:39.202   9422.445 -  9475.084:   67.5800%  (      448)
00:10:39.202   9475.084 -  9527.724:   70.2124%  (      342)
00:10:39.202   9527.724 -  9580.363:   72.2829%  (      269)
00:10:39.202   9580.363 -  9633.002:   74.2226%  (      252)
00:10:39.202   9633.002 -  9685.642:   75.8621%  (      213)
00:10:39.202   9685.642 -  9738.281:   77.5323%  (      217)
00:10:39.202   9738.281 -  9790.920:   79.0256%  (      194)
00:10:39.202   9790.920 -  9843.560:   80.7420%  (      223)
00:10:39.202   9843.560 -  9896.199:   81.8735%  (      147)
00:10:39.202   9896.199 -  9948.839:   83.1050%  (      160)
00:10:39.202   9948.839 - 10001.478:   83.9825%  (      114)
00:10:39.202  10001.478 - 10054.117:   84.8830%  (      117)
00:10:39.202  10054.117 - 10106.757:   85.5373%  (       85)
00:10:39.202  10106.757 - 10159.396:   86.5764%  (      135)
00:10:39.202  10159.396 - 10212.035:   87.1459%  (       74)
00:10:39.202  10212.035 - 10264.675:   87.6847%  (       70)
00:10:39.202  10264.675 - 10317.314:   88.1081%  (       55)
00:10:39.202  10317.314 - 10369.953:   88.4467%  (       44)
00:10:39.202  10369.953 - 10422.593:   88.8316%  (       50)
00:10:39.202  10422.593 - 10475.232:   89.1318%  (       39)
00:10:39.202  10475.232 - 10527.871:   89.3858%  (       33)
00:10:39.202  10527.871 - 10580.511:   89.5859%  (       26)
00:10:39.202  10580.511 - 10633.150:   89.7783%  (       25)
00:10:39.202  10633.150 - 10685.790:   89.8861%  (       14)
00:10:39.202  10685.790 - 10738.429:   89.9938%  (       14)
00:10:39.202  10738.429 - 10791.068:   90.1016%  (       14)
00:10:39.202  10791.068 - 10843.708:   90.1709%  (        9)
00:10:39.202  10843.708 - 10896.347:   90.2325%  (        8)
00:10:39.202  10896.347 - 10948.986:   90.3017%  (        9)
00:10:39.202  10948.986 - 11001.626:   90.3479%  (        6)
00:10:39.202  11001.626 - 11054.265:   90.4249%  (       10)
00:10:39.202  11054.265 - 11106.904:   90.5788%  (       20)
00:10:39.202  11106.904 - 11159.544:   90.6327%  (        7)
00:10:39.202  11159.544 - 11212.183:   90.6943%  (        8)
00:10:39.202  11212.183 - 11264.822:   90.8174%  (       16)
00:10:39.202  11264.822 - 11317.462:   90.9483%  (       17)
00:10:39.202  11317.462 - 11370.101:   91.0329%  (       11)
00:10:39.202  11370.101 - 11422.741:   91.1407%  (       14)
00:10:39.202  11422.741 - 11475.380:   91.3023%  (       21)
00:10:39.202  11475.380 - 11528.019:   91.4640%  (       21)
00:10:39.202  11528.019 - 11580.659:   91.5333%  (        9)
00:10:39.202  11580.659 - 11633.298:   91.5871%  (        7)
00:10:39.202  11633.298 - 11685.937:   91.6564%  (        9)
00:10:39.202  11685.937 - 11738.577:   91.7642%  (       14)
00:10:39.202  11738.577 - 11791.216:   91.8796%  (       15)
00:10:39.202  11791.216 - 11843.855:   91.9720%  (       12)
00:10:39.202  11843.855 - 11896.495:   92.0567%  (       11)
00:10:39.202  11896.495 - 11949.134:   92.1336%  (       10)
00:10:39.202  11949.134 - 12001.773:   92.1798%  (        6)
00:10:39.202  12001.773 - 12054.413:   92.2183%  (        5)
00:10:39.202  12054.413 - 12107.052:   92.2491%  (        4)
00:10:39.202  12107.052 - 12159.692:   92.2722%  (        3)
00:10:39.202  12159.692 - 12212.331:   92.2953%  (        3)
00:10:39.202  12212.331 - 12264.970:   92.3107%  (        2)
00:10:39.202  12264.970 - 12317.610:   92.3260%  (        2)
00:10:39.202  12317.610 - 12370.249:   92.3799%  (        7)
00:10:39.202  12370.249 - 12422.888:   92.4492%  (        9)
00:10:39.202  12422.888 - 12475.528:   92.5262%  (       10)
00:10:39.202  12475.528 - 12528.167:   92.6570%  (       17)
00:10:39.202  12528.167 - 12580.806:   92.9495%  (       38)
00:10:39.202  12580.806 - 12633.446:   93.1265%  (       23)
00:10:39.202  12633.446 - 12686.085:   93.2189%  (       12)
00:10:39.202  12686.085 - 12738.724:   93.2728%  (        7)
00:10:39.202  12738.724 - 12791.364:   93.3651%  (       12)
00:10:39.202  12791.364 - 12844.003:   93.4652%  (       13)
00:10:39.202  12844.003 - 12896.643:   93.5422%  (       10)
00:10:39.202  12896.643 - 12949.282:   93.6422%  (       13)
00:10:39.202  12949.282 - 13001.921:   93.7654%  (       16)
00:10:39.202  13001.921 - 13054.561:   93.9039%  (       18)
00:10:39.202  13054.561 - 13107.200:   94.1041%  (       26)
00:10:39.202  13107.200 - 13159.839:   94.4273%  (       42)
00:10:39.202  13159.839 - 13212.479:   94.6121%  (       24)
00:10:39.202  13212.479 - 13265.118:   94.7121%  (       13)
00:10:39.202  13265.118 - 13317.757:   94.7814%  (        9)
00:10:39.202  13317.757 - 13370.397:   94.8430%  (        8)
00:10:39.202  13370.397 - 13423.036:   94.8661%  (        3)
00:10:39.202  13423.036 - 13475.676:   94.9046%  (        5)
00:10:39.202  13475.676 - 13580.954:   94.9507%  (        6)
00:10:39.202  13580.954 - 13686.233:   95.0123%  (        8)
00:10:39.202  13686.233 - 13791.512:   95.0431%  (        4)
00:10:39.202  13791.512 - 13896.790:   95.0662%  (        3)
00:10:39.202  13896.790 - 14002.069:   95.0739%  (        1)
00:10:39.202  14423.184 - 14528.463:   95.0816%  (        1)
00:10:39.202  14528.463 - 14633.741:   95.1201%  (        5)
00:10:39.202  14633.741 - 14739.020:   95.1586%  (        5)
00:10:39.202  14739.020 - 14844.299:   95.1893%  (        4)
00:10:39.202  14844.299 - 14949.578:   95.2201%  (        4)
00:10:39.202  14949.578 - 15054.856:   95.2509%  (        4)
00:10:39.202  15054.856 - 15160.135:   95.4280%  (       23)
00:10:39.202  15160.135 - 15265.414:   95.5665%  (       18)
00:10:39.202  15265.414 - 15370.692:   95.7281%  (       21)
00:10:39.202  15370.692 - 15475.971:   95.8513%  (       16)
00:10:39.202  15475.971 - 15581.250:   96.0129%  (       21)
00:10:39.202  15581.250 - 15686.529:   96.1207%  (       14)
00:10:39.202  15686.529 - 15791.807:   96.2438%  (       16)
00:10:39.202  15791.807 - 15897.086:   96.4517%  (       27)
00:10:39.202  15897.086 - 16002.365:   96.6518%  (       26)
00:10:39.202  16002.365 - 16107.643:   96.9982%  (       45)
00:10:39.202  16107.643 - 16212.922:   97.1752%  (       23)
00:10:39.202  16212.922 - 16318.201:   97.3445%  (       22)
00:10:39.202  16318.201 - 16423.480:   97.4292%  (       11)
00:10:39.202  16423.480 - 16528.758:   97.4677%  (        5)
00:10:39.202  16528.758 - 16634.037:   97.4985%  (        4)
00:10:39.202  16634.037 - 16739.316:   97.5292%  (        4)
00:10:39.202  16739.316 - 16844.594:   97.5369%  (        1)
00:10:39.202  17686.824 - 17792.103:   97.5446%  (        1)
00:10:39.202  17792.103 - 17897.382:   97.5523%  (        1)
00:10:39.202  17897.382 - 18002.660:   97.7140%  (       21)
00:10:39.202  18002.660 - 18107.939:   97.8371%  (       16)
00:10:39.202  18107.939 - 18213.218:   97.8987%  (        8)
00:10:39.202  18213.218 - 18318.496:   97.9218%  (        3)
00:10:39.202  18318.496 - 18423.775:   98.0142%  (       12)
00:10:39.202  18423.775 - 18529.054:   98.1604%  (       19)
00:10:39.202  18529.054 - 18634.333:   98.2528%  (       12)
00:10:39.202  18634.333 - 18739.611:   98.3297%  (       10)
00:10:39.202  18739.611 - 18844.890:   98.4067%  (       10)
00:10:39.202  18844.890 - 18950.169:   98.4760%  (        9)
00:10:39.202  18950.169 - 19055.447:   98.5453%  (        9)
00:10:39.202  19055.447 - 19160.726:   98.6222%  (       10)
00:10:39.202  19160.726 - 19266.005:   98.6915%  (        9)
00:10:39.203  19266.005 - 19371.284:   98.7531%  (        8)
00:10:39.203  19371.284 - 19476.562:   98.8147%  (        8)
00:10:39.203  19476.562 - 19581.841:   98.8762%  (        8)
00:10:39.203  19581.841 - 19687.120:   98.8993%  (        3)
00:10:39.203  19687.120 - 19792.398:   98.9301%  (        4)
00:10:39.203  19792.398 - 19897.677:   98.9532%  (        3)
00:10:39.203  19897.677 - 20002.956:   98.9763%  (        3)
00:10:39.203  20002.956 - 20108.235:   99.0071%  (        4)
00:10:39.203  20108.235 - 20213.513:   99.0148%  (        1)
00:10:39.203  30530.827 - 30741.385:   99.0456%  (        4)
00:10:39.203  30741.385 - 30951.942:   99.0994%  (        7)
00:10:39.203  30951.942 - 31162.500:   99.1687%  (        9)
00:10:39.203  31162.500 - 31373.057:   99.2303%  (        8)
00:10:39.203  31373.057 - 31583.614:   99.2919%  (        8)
00:10:39.203  31583.614 - 31794.172:   99.3611%  (        9)
00:10:39.203  31794.172 - 32004.729:   99.4304%  (        9)
00:10:39.203  32004.729 - 32215.287:   99.4920%  (        8)
00:10:39.203  32215.287 - 32425.844:   99.5074%  (        2)
00:10:39.203  38532.010 - 38742.567:   99.5305%  (        3)
00:10:39.203  38742.567 - 38953.124:   99.5921%  (        8)
00:10:39.203  38953.124 - 39163.682:   99.6459%  (        7)
00:10:39.203  39163.682 - 39374.239:   99.6998%  (        7)
00:10:39.203  39374.239 - 39584.797:   99.7614%  (        8)
00:10:39.203  39584.797 - 39795.354:   99.8230%  (        8)
00:10:39.203  39795.354 - 40005.912:   99.8845%  (        8)
00:10:39.203  40005.912 - 40216.469:   99.9461%  (        8)
00:10:39.203  40216.469 - 40427.027:  100.0000%  (        7)
00:10:39.203  
00:10:39.203  Latency histogram for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:39.203  ==============================================================================
00:10:39.203         Range in us     Cumulative    IO count
00:10:39.203   7737.986 -  7790.625:    0.0154%  (        2)
00:10:39.203   7790.625 -  7843.264:    0.0693%  (        7)
00:10:39.203   7843.264 -  7895.904:    0.2155%  (       19)
00:10:39.203   7895.904 -  7948.543:    0.4079%  (       25)
00:10:39.203   7948.543 -  8001.182:    0.8698%  (       60)
00:10:39.203   8001.182 -  8053.822:    1.2161%  (       45)
00:10:39.203   8053.822 -  8106.461:    1.7626%  (       71)
00:10:39.203   8106.461 -  8159.100:    2.5169%  (       98)
00:10:39.203   8159.100 -  8211.740:    3.1404%  (       81)
00:10:39.203   8211.740 -  8264.379:    3.8639%  (       94)
00:10:39.203   8264.379 -  8317.018:    4.3257%  (       60)
00:10:39.203   8317.018 -  8369.658:    4.9184%  (       77)
00:10:39.203   8369.658 -  8422.297:    5.5265%  (       79)
00:10:39.203   8422.297 -  8474.937:    6.3193%  (      103)
00:10:39.203   8474.937 -  8527.576:    7.4353%  (      145)
00:10:39.203   8527.576 -  8580.215:    8.8131%  (      179)
00:10:39.203   8580.215 -  8632.855:   10.9529%  (      278)
00:10:39.203   8632.855 -  8685.494:   13.6623%  (      352)
00:10:39.203   8685.494 -  8738.133:   16.8411%  (      413)
00:10:39.203   8738.133 -  8790.773:   20.4280%  (      466)
00:10:39.203   8790.773 -  8843.412:   24.6767%  (      552)
00:10:39.203   8843.412 -  8896.051:   29.0102%  (      563)
00:10:39.203   8896.051 -  8948.691:   33.0973%  (      531)
00:10:39.203   8948.691 -  9001.330:   37.3461%  (      552)
00:10:39.203   9001.330 -  9053.969:   41.4563%  (      534)
00:10:39.203   9053.969 -  9106.609:   45.0200%  (      463)
00:10:39.203   9106.609 -  9159.248:   49.0917%  (      529)
00:10:39.203   9159.248 -  9211.888:   52.4169%  (      432)
00:10:39.203   9211.888 -  9264.527:   55.8267%  (      443)
00:10:39.203   9264.527 -  9317.166:   58.9055%  (      400)
00:10:39.203   9317.166 -  9369.806:   62.0305%  (      406)
00:10:39.203   9369.806 -  9422.445:   64.9477%  (      379)
00:10:39.203   9422.445 -  9475.084:   67.7032%  (      358)
00:10:39.203   9475.084 -  9527.724:   70.2278%  (      328)
00:10:39.203   9527.724 -  9580.363:   72.4908%  (      294)
00:10:39.203   9580.363 -  9633.002:   74.6075%  (      275)
00:10:39.203   9633.002 -  9685.642:   76.5394%  (      251)
00:10:39.203   9685.642 -  9738.281:   78.3867%  (      240)
00:10:39.203   9738.281 -  9790.920:   80.2725%  (      245)
00:10:39.203   9790.920 -  9843.560:   81.7041%  (      186)
00:10:39.203   9843.560 -  9896.199:   82.8356%  (      147)
00:10:39.203   9896.199 -  9948.839:   83.7977%  (      125)
00:10:39.203   9948.839 - 10001.478:   84.7137%  (      119)
00:10:39.203  10001.478 - 10054.117:   85.7451%  (      134)
00:10:39.203  10054.117 - 10106.757:   86.5994%  (      111)
00:10:39.203  10106.757 - 10159.396:   87.0998%  (       65)
00:10:39.203  10159.396 - 10212.035:   87.6539%  (       72)
00:10:39.203  10212.035 - 10264.675:   88.1004%  (       58)
00:10:39.203  10264.675 - 10317.314:   88.3775%  (       36)
00:10:39.203  10317.314 - 10369.953:   88.5314%  (       20)
00:10:39.203  10369.953 - 10422.593:   88.7392%  (       27)
00:10:39.203  10422.593 - 10475.232:   88.8778%  (       18)
00:10:39.203  10475.232 - 10527.871:   88.9855%  (       14)
00:10:39.203  10527.871 - 10580.511:   89.1241%  (       18)
00:10:39.203  10580.511 - 10633.150:   89.2318%  (       14)
00:10:39.203  10633.150 - 10685.790:   89.3704%  (       18)
00:10:39.203  10685.790 - 10738.429:   89.5397%  (       22)
00:10:39.203  10738.429 - 10791.068:   89.6706%  (       17)
00:10:39.203  10791.068 - 10843.708:   89.8861%  (       28)
00:10:39.203  10843.708 - 10896.347:   90.0631%  (       23)
00:10:39.203  10896.347 - 10948.986:   90.2632%  (       26)
00:10:39.203  10948.986 - 11001.626:   90.4326%  (       22)
00:10:39.203  11001.626 - 11054.265:   90.5557%  (       16)
00:10:39.203  11054.265 - 11106.904:   90.6558%  (       13)
00:10:39.203  11106.904 - 11159.544:   90.7789%  (       16)
00:10:39.203  11159.544 - 11212.183:   90.9175%  (       18)
00:10:39.203  11212.183 - 11264.822:   91.0099%  (       12)
00:10:39.203  11264.822 - 11317.462:   91.0868%  (       10)
00:10:39.203  11317.462 - 11370.101:   91.1407%  (        7)
00:10:39.203  11370.101 - 11422.741:   91.2023%  (        8)
00:10:39.203  11422.741 - 11475.380:   91.3331%  (       17)
00:10:39.203  11475.380 - 11528.019:   91.5102%  (       23)
00:10:39.203  11528.019 - 11580.659:   91.5409%  (        4)
00:10:39.203  11580.659 - 11633.298:   91.5640%  (        3)
00:10:39.203  11633.298 - 11685.937:   91.6025%  (        5)
00:10:39.203  11685.937 - 11738.577:   91.6410%  (        5)
00:10:39.203  11738.577 - 11791.216:   91.6795%  (        5)
00:10:39.203  11791.216 - 11843.855:   91.7026%  (        3)
00:10:39.203  11843.855 - 11896.495:   91.7180%  (        2)
00:10:39.203  11896.495 - 11949.134:   91.7719%  (        7)
00:10:39.203  11949.134 - 12001.773:   91.8026%  (        4)
00:10:39.203  12001.773 - 12054.413:   91.8411%  (        5)
00:10:39.203  12054.413 - 12107.052:   91.9258%  (       11)
00:10:39.203  12107.052 - 12159.692:   92.0028%  (       10)
00:10:39.203  12159.692 - 12212.331:   92.1567%  (       20)
00:10:39.203  12212.331 - 12264.970:   92.2953%  (       18)
00:10:39.203  12264.970 - 12317.610:   92.4877%  (       25)
00:10:39.203  12317.610 - 12370.249:   92.6493%  (       21)
00:10:39.203  12370.249 - 12422.888:   92.8110%  (       21)
00:10:39.203  12422.888 - 12475.528:   92.9957%  (       24)
00:10:39.203  12475.528 - 12528.167:   93.1496%  (       20)
00:10:39.203  12528.167 - 12580.806:   93.2959%  (       19)
00:10:39.203  12580.806 - 12633.446:   93.3882%  (       12)
00:10:39.203  12633.446 - 12686.085:   93.5191%  (       17)
00:10:39.203  12686.085 - 12738.724:   93.6730%  (       20)
00:10:39.203  12738.724 - 12791.364:   93.7731%  (       13)
00:10:39.203  12791.364 - 12844.003:   93.9193%  (       19)
00:10:39.203  12844.003 - 12896.643:   94.0733%  (       20)
00:10:39.203  12896.643 - 12949.282:   94.1425%  (        9)
00:10:39.203  12949.282 - 13001.921:   94.2118%  (        9)
00:10:39.203  13001.921 - 13054.561:   94.2734%  (        8)
00:10:39.203  13054.561 - 13107.200:   94.3196%  (        6)
00:10:39.203  13107.200 - 13159.839:   94.3504%  (        4)
00:10:39.203  13159.839 - 13212.479:   94.3966%  (        6)
00:10:39.203  13212.479 - 13265.118:   94.4504%  (        7)
00:10:39.203  13265.118 - 13317.757:   94.4812%  (        4)
00:10:39.203  13317.757 - 13370.397:   94.5043%  (        3)
00:10:39.203  13370.397 - 13423.036:   94.5120%  (        1)
00:10:39.203  13423.036 - 13475.676:   94.5274%  (        2)
00:10:39.203  13475.676 - 13580.954:   94.6352%  (       14)
00:10:39.203  13580.954 - 13686.233:   94.8045%  (       22)
00:10:39.203  13686.233 - 13791.512:   94.9276%  (       16)
00:10:39.203  13791.512 - 13896.790:   95.0123%  (       11)
00:10:39.203  13896.790 - 14002.069:   95.0739%  (        8)
00:10:39.203  14317.905 - 14423.184:   95.0816%  (        1)
00:10:39.203  14423.184 - 14528.463:   95.1278%  (        6)
00:10:39.203  14528.463 - 14633.741:   95.1970%  (        9)
00:10:39.203  14633.741 - 14739.020:   95.3510%  (       20)
00:10:39.203  14739.020 - 14844.299:   95.5049%  (       20)
00:10:39.203  14844.299 - 14949.578:   95.6897%  (       24)
00:10:39.203  14949.578 - 15054.856:   95.8821%  (       25)
00:10:39.203  15054.856 - 15160.135:   96.0360%  (       20)
00:10:39.203  15160.135 - 15265.414:   96.1977%  (       21)
00:10:39.203  15265.414 - 15370.692:   96.3362%  (       18)
00:10:39.203  15370.692 - 15475.971:   96.4209%  (       11)
00:10:39.203  15475.971 - 15581.250:   96.5132%  (       12)
00:10:39.203  15581.250 - 15686.529:   96.6056%  (       12)
00:10:39.203  15686.529 - 15791.807:   96.6672%  (        8)
00:10:39.203  15791.807 - 15897.086:   96.7442%  (       10)
00:10:39.203  15897.086 - 16002.365:   96.8673%  (       16)
00:10:39.203  16002.365 - 16107.643:   96.9828%  (       15)
00:10:39.203  16107.643 - 16212.922:   97.0366%  (        7)
00:10:39.203  16212.922 - 16318.201:   97.0443%  (        1)
00:10:39.203  16528.758 - 16634.037:   97.0520%  (        1)
00:10:39.203  16634.037 - 16739.316:   97.1675%  (       15)
00:10:39.203  16739.316 - 16844.594:   97.3060%  (       18)
00:10:39.203  16844.594 - 16949.873:   97.4215%  (       15)
00:10:39.203  16949.873 - 17055.152:   97.4985%  (       10)
00:10:39.203  17055.152 - 17160.431:   97.5369%  (        5)
00:10:39.203  17897.382 - 18002.660:   97.6370%  (       13)
00:10:39.203  18002.660 - 18107.939:   97.6678%  (        4)
00:10:39.203  18107.939 - 18213.218:   97.7833%  (       15)
00:10:39.203  18213.218 - 18318.496:   97.8987%  (       15)
00:10:39.203  18318.496 - 18423.775:   98.0373%  (       18)
00:10:39.203  18423.775 - 18529.054:   98.3067%  (       35)
00:10:39.203  18529.054 - 18634.333:   98.4375%  (       17)
00:10:39.203  18634.333 - 18739.611:   98.5376%  (       13)
00:10:39.203  18739.611 - 18844.890:   98.6222%  (       11)
00:10:39.203  18844.890 - 18950.169:   98.7223%  (       13)
00:10:39.203  18950.169 - 19055.447:   98.8147%  (       12)
00:10:39.203  19055.447 - 19160.726:   98.8685%  (        7)
00:10:39.203  19160.726 - 19266.005:   98.8916%  (        3)
00:10:39.203  19266.005 - 19371.284:   98.9224%  (        4)
00:10:39.203  19371.284 - 19476.562:   98.9455%  (        3)
00:10:39.203  19476.562 - 19581.841:   98.9763%  (        4)
00:10:39.203  19581.841 - 19687.120:   98.9994%  (        3)
00:10:39.203  19687.120 - 19792.398:   99.0148%  (        2)
00:10:39.203  29056.925 - 29267.483:   99.0687%  (        7)
00:10:39.203  29267.483 - 29478.040:   99.1302%  (        8)
00:10:39.203  29478.040 - 29688.598:   99.1918%  (        8)
00:10:39.203  29688.598 - 29899.155:   99.2611%  (        9)
00:10:39.203  29899.155 - 30109.712:   99.3227%  (        8)
00:10:39.203  30109.712 - 30320.270:   99.3842%  (        8)
00:10:39.203  30320.270 - 30530.827:   99.4458%  (        8)
00:10:39.204  30530.827 - 30741.385:   99.5074%  (        8)
00:10:39.204  36847.550 - 37058.108:   99.5459%  (        5)
00:10:39.204  37058.108 - 37268.665:   99.6075%  (        8)
00:10:39.204  37268.665 - 37479.222:   99.6690%  (        8)
00:10:39.204  37479.222 - 37689.780:   99.7306%  (        8)
00:10:39.204  37689.780 - 37900.337:   99.7922%  (        8)
00:10:39.204  37900.337 - 38110.895:   99.8461%  (        7)
00:10:39.204  38110.895 - 38321.452:   99.9076%  (        8)
00:10:39.204  38321.452 - 38532.010:   99.9769%  (        9)
00:10:39.204  38532.010 - 38742.567:  100.0000%  (        3)
00:10:39.204  
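Each histogram row gives a latency bucket in microseconds, the cumulative percentage of I/Os completed at or below the bucket's upper bound, and the raw I/O count that fell into the bucket; buckets with zero hits are omitted, which is why some ranges (e.g. 14002.069 - 14317.905 above) are skipped. The cumulative column is simply a running sum of counts divided by the total count. A minimal awk sketch of the same computation, assuming a hypothetical two-column input file buckets.txt of "upper_bound_us count" pairs rather than this exact log format:

    # Recompute cumulative percentages from per-bucket IO counts.
    awk '{ upper[NR] = $1; count[NR] = $2; total += $2 }
         END {
           run = 0
           for (i = 1; i <= NR; i++) {
             run += count[i]
             printf "%10.3f: %8.4f%%  (%6d)\n", upper[i], 100 * run / total, count[i]
           }
         }' buckets.txt
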
00:10:39.204  Latency histogram for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:39.204  ==============================================================================
00:10:39.204         Range in us     Cumulative    IO count
00:10:39.204   7790.625 -  7843.264:    0.0154%  (        2)
00:10:39.204   7843.264 -  7895.904:    0.0385%  (        3)
00:10:39.204   7895.904 -  7948.543:    0.1308%  (       12)
00:10:39.204   7948.543 -  8001.182:    0.2232%  (       12)
00:10:39.204   8001.182 -  8053.822:    0.4541%  (       30)
00:10:39.204   8053.822 -  8106.461:    0.7774%  (       42)
00:10:39.204   8106.461 -  8159.100:    1.2931%  (       67)
00:10:39.204   8159.100 -  8211.740:    2.0782%  (      102)
00:10:39.204   8211.740 -  8264.379:    2.9249%  (      110)
00:10:39.204   8264.379 -  8317.018:    3.7792%  (      111)
00:10:39.204   8317.018 -  8369.658:    4.4566%  (       88)
00:10:39.204   8369.658 -  8422.297:    5.3033%  (      110)
00:10:39.204   8422.297 -  8474.937:    6.2654%  (      125)
00:10:39.204   8474.937 -  8527.576:    7.1813%  (      119)
00:10:39.204   8527.576 -  8580.215:    8.6746%  (      194)
00:10:39.204   8580.215 -  8632.855:   10.9221%  (      292)
00:10:39.204   8632.855 -  8685.494:   13.3236%  (      312)
00:10:39.204   8685.494 -  8738.133:   16.9027%  (      465)
00:10:39.204   8738.133 -  8790.773:   20.6897%  (      492)
00:10:39.204   8790.773 -  8843.412:   24.8692%  (      543)
00:10:39.204   8843.412 -  8896.051:   29.2180%  (      565)
00:10:39.204   8896.051 -  8948.691:   33.5822%  (      567)
00:10:39.204   8948.691 -  9001.330:   37.9464%  (      567)
00:10:39.204   9001.330 -  9053.969:   42.0336%  (      531)
00:10:39.204   9053.969 -  9106.609:   45.8282%  (      493)
00:10:39.204   9106.609 -  9159.248:   49.8076%  (      517)
00:10:39.204   9159.248 -  9211.888:   53.4483%  (      473)
00:10:39.204   9211.888 -  9264.527:   56.6656%  (      418)
00:10:39.204   9264.527 -  9317.166:   60.0754%  (      443)
00:10:39.204   9317.166 -  9369.806:   63.1696%  (      402)
00:10:39.204   9369.806 -  9422.445:   66.4255%  (      423)
00:10:39.204   9422.445 -  9475.084:   68.9655%  (      330)
00:10:39.204   9475.084 -  9527.724:   71.2438%  (      296)
00:10:39.204   9527.724 -  9580.363:   73.0757%  (      238)
00:10:39.204   9580.363 -  9633.002:   74.9538%  (      244)
00:10:39.204   9633.002 -  9685.642:   77.0166%  (      268)
00:10:39.204   9685.642 -  9738.281:   78.4868%  (      191)
00:10:39.204   9738.281 -  9790.920:   79.5951%  (      144)
00:10:39.204   9790.920 -  9843.560:   80.7651%  (      152)
00:10:39.204   9843.560 -  9896.199:   82.2352%  (      191)
00:10:39.204   9896.199 -  9948.839:   83.0126%  (      101)
00:10:39.204   9948.839 - 10001.478:   84.2442%  (      160)
00:10:39.204  10001.478 - 10054.117:   85.2602%  (      132)
00:10:39.204  10054.117 - 10106.757:   85.8297%  (       74)
00:10:39.204  10106.757 - 10159.396:   86.2916%  (       60)
00:10:39.204  10159.396 - 10212.035:   86.5994%  (       40)
00:10:39.204  10212.035 - 10264.675:   86.8227%  (       29)
00:10:39.204  10264.675 - 10317.314:   87.0921%  (       35)
00:10:39.204  10317.314 - 10369.953:   87.5231%  (       56)
00:10:39.204  10369.953 - 10422.593:   87.8695%  (       45)
00:10:39.204  10422.593 - 10475.232:   88.2004%  (       43)
00:10:39.204  10475.232 - 10527.871:   88.4775%  (       36)
00:10:39.204  10527.871 - 10580.511:   88.9009%  (       55)
00:10:39.204  10580.511 - 10633.150:   89.0702%  (       22)
00:10:39.204  10633.150 - 10685.790:   89.2318%  (       21)
00:10:39.204  10685.790 - 10738.429:   89.3858%  (       20)
00:10:39.204  10738.429 - 10791.068:   89.6090%  (       29)
00:10:39.204  10791.068 - 10843.708:   89.7091%  (       13)
00:10:39.204  10843.708 - 10896.347:   89.8707%  (       21)
00:10:39.204  10896.347 - 10948.986:   89.9784%  (       14)
00:10:39.204  10948.986 - 11001.626:   90.0939%  (       15)
00:10:39.204  11001.626 - 11054.265:   90.2094%  (       15)
00:10:39.204  11054.265 - 11106.904:   90.3094%  (       13)
00:10:39.204  11106.904 - 11159.544:   90.5018%  (       25)
00:10:39.204  11159.544 - 11212.183:   90.6019%  (       13)
00:10:39.204  11212.183 - 11264.822:   90.6712%  (        9)
00:10:39.204  11264.822 - 11317.462:   90.7405%  (        9)
00:10:39.204  11317.462 - 11370.101:   90.7866%  (        6)
00:10:39.204  11370.101 - 11422.741:   90.8174%  (        4)
00:10:39.204  11422.741 - 11475.380:   90.8559%  (        5)
00:10:39.204  11475.380 - 11528.019:   90.8944%  (        5)
00:10:39.204  11528.019 - 11580.659:   90.9483%  (        7)
00:10:39.204  11580.659 - 11633.298:   90.9868%  (        5)
00:10:39.204  11633.298 - 11685.937:   91.0714%  (       11)
00:10:39.204  11685.937 - 11738.577:   91.1099%  (        5)
00:10:39.204  11738.577 - 11791.216:   91.1484%  (        5)
00:10:39.204  11791.216 - 11843.855:   91.2562%  (       14)
00:10:39.204  11843.855 - 11896.495:   91.4255%  (       22)
00:10:39.204  11896.495 - 11949.134:   91.6487%  (       29)
00:10:39.204  11949.134 - 12001.773:   91.8103%  (       21)
00:10:39.204  12001.773 - 12054.413:   92.0720%  (       34)
00:10:39.204  12054.413 - 12107.052:   92.2645%  (       25)
00:10:39.204  12107.052 - 12159.692:   92.4107%  (       19)
00:10:39.204  12159.692 - 12212.331:   92.5877%  (       23)
00:10:39.204  12212.331 - 12264.970:   92.7340%  (       19)
00:10:39.204  12264.970 - 12317.610:   92.9264%  (       25)
00:10:39.204  12317.610 - 12370.249:   93.1573%  (       30)
00:10:39.204  12370.249 - 12422.888:   93.2959%  (       18)
00:10:39.204  12422.888 - 12475.528:   93.4267%  (       17)
00:10:39.204  12475.528 - 12528.167:   93.5345%  (       14)
00:10:39.204  12528.167 - 12580.806:   93.6192%  (       11)
00:10:39.204  12580.806 - 12633.446:   93.7038%  (       11)
00:10:39.204  12633.446 - 12686.085:   93.7808%  (       10)
00:10:39.204  12686.085 - 12738.724:   93.8808%  (       13)
00:10:39.204  12738.724 - 12791.364:   93.9501%  (        9)
00:10:39.204  12791.364 - 12844.003:   94.0040%  (        7)
00:10:39.204  12844.003 - 12896.643:   94.0579%  (        7)
00:10:39.204  12896.643 - 12949.282:   94.1272%  (        9)
00:10:39.204  12949.282 - 13001.921:   94.1964%  (        9)
00:10:39.204  13001.921 - 13054.561:   94.2734%  (       10)
00:10:39.204  13054.561 - 13107.200:   94.3350%  (        8)
00:10:39.204  13107.200 - 13159.839:   94.3735%  (        5)
00:10:39.204  13159.839 - 13212.479:   94.4119%  (        5)
00:10:39.204  13212.479 - 13265.118:   94.4427%  (        4)
00:10:39.204  13265.118 - 13317.757:   94.4735%  (        4)
00:10:39.204  13317.757 - 13370.397:   94.5043%  (        4)
00:10:39.204  13370.397 - 13423.036:   94.5274%  (        3)
00:10:39.204  13423.036 - 13475.676:   94.5428%  (        2)
00:10:39.204  13475.676 - 13580.954:   94.5736%  (        4)
00:10:39.204  13580.954 - 13686.233:   94.5890%  (        2)
00:10:39.204  13686.233 - 13791.512:   94.5967%  (        1)
00:10:39.204  13791.512 - 13896.790:   94.6198%  (        3)
00:10:39.204  13896.790 - 14002.069:   94.8045%  (       24)
00:10:39.204  14002.069 - 14107.348:   95.1047%  (       39)
00:10:39.204  14107.348 - 14212.627:   95.3279%  (       29)
00:10:39.204  14212.627 - 14317.905:   95.5280%  (       26)
00:10:39.204  14317.905 - 14423.184:   95.6897%  (       21)
00:10:39.204  14423.184 - 14528.463:   95.8898%  (       26)
00:10:39.204  14528.463 - 14633.741:   96.1284%  (       31)
00:10:39.204  14633.741 - 14739.020:   96.3054%  (       23)
00:10:39.204  14739.020 - 14844.299:   96.3824%  (       10)
00:10:39.204  14844.299 - 14949.578:   96.4209%  (        5)
00:10:39.204  14949.578 - 15054.856:   96.4517%  (        4)
00:10:39.204  15054.856 - 15160.135:   96.4748%  (        3)
00:10:39.204  15160.135 - 15265.414:   96.4978%  (        3)
00:10:39.204  15265.414 - 15370.692:   96.5286%  (        4)
00:10:39.204  15370.692 - 15475.971:   96.5517%  (        3)
00:10:39.204  16002.365 - 16107.643:   96.5594%  (        1)
00:10:39.204  16212.922 - 16318.201:   96.5825%  (        3)
00:10:39.204  16318.201 - 16423.480:   96.6210%  (        5)
00:10:39.204  16423.480 - 16528.758:   96.7057%  (       11)
00:10:39.204  16528.758 - 16634.037:   97.0135%  (       40)
00:10:39.204  16634.037 - 16739.316:   97.0443%  (        4)
00:10:39.204  17265.709 - 17370.988:   97.1136%  (        9)
00:10:39.204  17370.988 - 17476.267:   97.2522%  (       18)
00:10:39.204  17476.267 - 17581.545:   97.5754%  (       42)
00:10:39.204  17581.545 - 17686.824:   97.7294%  (       20)
00:10:39.204  17686.824 - 17792.103:   97.8371%  (       14)
00:10:39.204  17792.103 - 17897.382:   97.9449%  (       14)
00:10:39.204  17897.382 - 18002.660:   97.9988%  (        7)
00:10:39.204  18002.660 - 18107.939:   98.0526%  (        7)
00:10:39.204  18107.939 - 18213.218:   98.1219%  (        9)
00:10:39.204  18213.218 - 18318.496:   98.1989%  (       10)
00:10:39.204  18318.496 - 18423.775:   98.2759%  (       10)
00:10:39.204  18423.775 - 18529.054:   98.3220%  (        6)
00:10:39.204  18529.054 - 18634.333:   98.3759%  (        7)
00:10:39.204  18634.333 - 18739.611:   98.4452%  (        9)
00:10:39.205  18739.611 - 18844.890:   98.7300%  (       37)
00:10:39.205  18844.890 - 18950.169:   98.8300%  (       13)
00:10:39.205  18950.169 - 19055.447:   98.8839%  (        7)
00:10:39.205  19055.447 - 19160.726:   98.9378%  (        7)
00:10:39.205  19160.726 - 19266.005:   98.9763%  (        5)
00:10:39.205  19266.005 - 19371.284:   98.9994%  (        3)
00:10:39.205  19371.284 - 19476.562:   99.0148%  (        2)
00:10:39.205  27793.581 - 28004.138:   99.0764%  (        8)
00:10:39.205  28004.138 - 28214.696:   99.1456%  (        9)
00:10:39.205  28214.696 - 28425.253:   99.2072%  (        8)
00:10:39.205  28425.253 - 28635.810:   99.2688%  (        8)
00:10:39.205  28635.810 - 28846.368:   99.3304%  (        8)
00:10:39.205  28846.368 - 29056.925:   99.3996%  (        9)
00:10:39.205  29056.925 - 29267.483:   99.4612%  (        8)
00:10:39.205  29267.483 - 29478.040:   99.5074%  (        6)
00:10:39.205  34952.533 - 35163.091:   99.5151%  (        1)
00:10:39.205  35163.091 - 35373.648:   99.5767%  (        8)
00:10:39.205  35373.648 - 35584.206:   99.6382%  (        8)
00:10:39.205  35584.206 - 35794.763:   99.6998%  (        8)
00:10:39.205  35794.763 - 36005.320:   99.7691%  (        9)
00:10:39.205  36005.320 - 36215.878:   99.8307%  (        8)
00:10:39.205  36215.878 - 36426.435:   99.8999%  (        9)
00:10:39.205  36426.435 - 36636.993:   99.9538%  (        7)
00:10:39.205  36636.993 - 36847.550:  100.0000%  (        6)
00:10:39.205  
00:10:39.205  Latency histogram for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:39.205  ==============================================================================
00:10:39.205         Range in us     Cumulative    IO count
00:10:39.205   7737.986 -  7790.625:    0.0153%  (        2)
00:10:39.205   7843.264 -  7895.904:    0.0306%  (        2)
00:10:39.205   7895.904 -  7948.543:    0.1072%  (       10)
00:10:39.205   7948.543 -  8001.182:    0.2604%  (       20)
00:10:39.205   8001.182 -  8053.822:    0.6587%  (       52)
00:10:39.205   8053.822 -  8106.461:    1.2714%  (       80)
00:10:39.205   8106.461 -  8159.100:    1.8919%  (       81)
00:10:39.205   8159.100 -  8211.740:    2.7267%  (      109)
00:10:39.205   8211.740 -  8264.379:    3.4084%  (       89)
00:10:39.205   8264.379 -  8317.018:    4.1973%  (      103)
00:10:39.205   8317.018 -  8369.658:    4.8024%  (       79)
00:10:39.205   8369.658 -  8422.297:    5.3998%  (       78)
00:10:39.205   8422.297 -  8474.937:    6.1734%  (      101)
00:10:39.205   8474.937 -  8527.576:    7.3683%  (      156)
00:10:39.205   8527.576 -  8580.215:    8.9691%  (      209)
00:10:39.205   8580.215 -  8632.855:   11.5809%  (      341)
00:10:39.205   8632.855 -  8685.494:   14.7289%  (      411)
00:10:39.205   8685.494 -  8738.133:   18.4053%  (      480)
00:10:39.205   8738.133 -  8790.773:   22.2963%  (      508)
00:10:39.205   8790.773 -  8843.412:   26.0570%  (      491)
00:10:39.205   8843.412 -  8896.051:   29.8330%  (      493)
00:10:39.205   8896.051 -  8948.691:   33.7776%  (      515)
00:10:39.205   8948.691 -  9001.330:   37.2855%  (      458)
00:10:39.205   9001.330 -  9053.969:   41.2760%  (      521)
00:10:39.205   9053.969 -  9106.609:   44.8223%  (      463)
00:10:39.205   9106.609 -  9159.248:   48.5141%  (      482)
00:10:39.205   9159.248 -  9211.888:   51.7004%  (      416)
00:10:39.205   9211.888 -  9264.527:   54.8483%  (      411)
00:10:39.205   9264.527 -  9317.166:   58.0499%  (      418)
00:10:39.205   9317.166 -  9369.806:   61.6728%  (      473)
00:10:39.205   9369.806 -  9422.445:   65.6173%  (      515)
00:10:39.205   9422.445 -  9475.084:   68.3670%  (      359)
00:10:39.205   9475.084 -  9527.724:   71.1014%  (      357)
00:10:39.205   9527.724 -  9580.363:   73.3456%  (      293)
00:10:39.205   9580.363 -  9633.002:   75.2221%  (      245)
00:10:39.205   9633.002 -  9685.642:   76.8842%  (      217)
00:10:39.205   9685.642 -  9738.281:   78.6918%  (      236)
00:10:39.205   9738.281 -  9790.920:   80.2237%  (      200)
00:10:39.205   9790.920 -  9843.560:   81.9470%  (      225)
00:10:39.205   9843.560 -  9896.199:   82.9350%  (      129)
00:10:39.205   9896.199 -  9948.839:   83.7086%  (      101)
00:10:39.205   9948.839 - 10001.478:   84.1759%  (       61)
00:10:39.205  10001.478 - 10054.117:   84.5282%  (       46)
00:10:39.205  10054.117 - 10106.757:   85.0797%  (       72)
00:10:39.205  10106.757 - 10159.396:   85.6158%  (       70)
00:10:39.205  10159.396 - 10212.035:   85.9145%  (       39)
00:10:39.205  10212.035 - 10264.675:   86.3741%  (       60)
00:10:39.205  10264.675 - 10317.314:   86.6881%  (       41)
00:10:39.205  10317.314 - 10369.953:   86.9485%  (       34)
00:10:39.205  10369.953 - 10422.593:   87.3698%  (       55)
00:10:39.205  10422.593 - 10475.232:   87.7298%  (       47)
00:10:39.205  10475.232 - 10527.871:   87.8370%  (       14)
00:10:39.205  10527.871 - 10580.511:   88.0668%  (       30)
00:10:39.205  10580.511 - 10633.150:   88.1740%  (       14)
00:10:39.205  10633.150 - 10685.790:   88.3425%  (       22)
00:10:39.205  10685.790 - 10738.429:   88.4727%  (       17)
00:10:39.205  10738.429 - 10791.068:   88.7102%  (       31)
00:10:39.205  10791.068 - 10843.708:   88.9782%  (       35)
00:10:39.205  10843.708 - 10896.347:   89.0855%  (       14)
00:10:39.205  10896.347 - 10948.986:   89.1391%  (        7)
00:10:39.205  10948.986 - 11001.626:   89.2004%  (        8)
00:10:39.205  11001.626 - 11054.265:   89.2923%  (       12)
00:10:39.205  11054.265 - 11106.904:   89.3919%  (       13)
00:10:39.205  11106.904 - 11159.544:   89.5067%  (       15)
00:10:39.205  11159.544 - 11212.183:   89.6599%  (       20)
00:10:39.205  11212.183 - 11264.822:   89.7518%  (       12)
00:10:39.205  11264.822 - 11317.462:   89.8667%  (       15)
00:10:39.205  11317.462 - 11370.101:   89.9740%  (       14)
00:10:39.205  11370.101 - 11422.741:   90.0812%  (       14)
00:10:39.205  11422.741 - 11475.380:   90.1425%  (        8)
00:10:39.205  11475.380 - 11528.019:   90.1808%  (        5)
00:10:39.205  11528.019 - 11580.659:   90.2420%  (        8)
00:10:39.205  11580.659 - 11633.298:   90.2880%  (        6)
00:10:39.205  11633.298 - 11685.937:   90.3646%  (       10)
00:10:39.205  11685.937 - 11738.577:   90.4488%  (       11)
00:10:39.205  11738.577 - 11791.216:   90.6327%  (       24)
00:10:39.205  11791.216 - 11843.855:   90.7705%  (       18)
00:10:39.205  11843.855 - 11896.495:   90.9620%  (       25)
00:10:39.205  11896.495 - 11949.134:   91.1535%  (       25)
00:10:39.205  11949.134 - 12001.773:   91.2837%  (       17)
00:10:39.205  12001.773 - 12054.413:   91.4982%  (       28)
00:10:39.205  12054.413 - 12107.052:   91.7279%  (       30)
00:10:39.205  12107.052 - 12159.692:   91.8811%  (       20)
00:10:39.205  12159.692 - 12212.331:   92.0037%  (       16)
00:10:39.205  12212.331 - 12264.970:   92.1569%  (       20)
00:10:39.205  12264.970 - 12317.610:   92.3100%  (       20)
00:10:39.205  12317.610 - 12370.249:   92.4479%  (       18)
00:10:39.205  12370.249 - 12422.888:   92.5015%  (        7)
00:10:39.205  12422.888 - 12475.528:   92.5551%  (        7)
00:10:39.205  12475.528 - 12528.167:   92.6088%  (        7)
00:10:39.205  12528.167 - 12580.806:   92.7083%  (       13)
00:10:39.205  12580.806 - 12633.446:   92.7849%  (       10)
00:10:39.205  12633.446 - 12686.085:   92.8768%  (       12)
00:10:39.205  12686.085 - 12738.724:   93.1602%  (       37)
00:10:39.205  12738.724 - 12791.364:   93.3747%  (       28)
00:10:39.205  12791.364 - 12844.003:   93.5662%  (       25)
00:10:39.205  12844.003 - 12896.643:   93.7347%  (       22)
00:10:39.205  12896.643 - 12949.282:   93.8725%  (       18)
00:10:39.205  12949.282 - 13001.921:   93.9798%  (       14)
00:10:39.205  13001.921 - 13054.561:   94.0870%  (       14)
00:10:39.205  13054.561 - 13107.200:   94.1253%  (        5)
00:10:39.205  13107.200 - 13159.839:   94.1406%  (        2)
00:10:39.205  13159.839 - 13212.479:   94.1559%  (        2)
00:10:39.205  13212.479 - 13265.118:   94.1713%  (        2)
00:10:39.205  13265.118 - 13317.757:   94.2402%  (        9)
00:10:39.205  13317.757 - 13370.397:   94.2938%  (        7)
00:10:39.205  13370.397 - 13423.036:   94.3474%  (        7)
00:10:39.205  13423.036 - 13475.676:   94.4164%  (        9)
00:10:39.205  13475.676 - 13580.954:   94.6078%  (       25)
00:10:39.205  13580.954 - 13686.233:   94.8376%  (       30)
00:10:39.205  13686.233 - 13791.512:   94.9525%  (       15)
00:10:39.205  13791.512 - 13896.790:   95.0980%  (       19)
00:10:39.205  13896.790 - 14002.069:   95.3202%  (       29)
00:10:39.205  14002.069 - 14107.348:   95.5193%  (       26)
00:10:39.205  14107.348 - 14212.627:   95.7491%  (       30)
00:10:39.205  14212.627 - 14317.905:   95.8793%  (       17)
00:10:39.205  14317.905 - 14423.184:   96.0325%  (       20)
00:10:39.205  14423.184 - 14528.463:   96.2929%  (       34)
00:10:39.205  14528.463 - 14633.741:   96.4384%  (       19)
00:10:39.205  14633.741 - 14739.020:   96.5150%  (       10)
00:10:39.205  14739.020 - 14844.299:   96.5380%  (        3)
00:10:39.205  14844.299 - 14949.578:   96.5610%  (        3)
00:10:39.205  14949.578 - 15054.856:   96.5686%  (        1)
00:10:39.205  16002.365 - 16107.643:   96.5763%  (        1)
00:10:39.205  16423.480 - 16528.758:   96.5839%  (        1)
00:10:39.205  16528.758 - 16634.037:   96.6146%  (        4)
00:10:39.205  16634.037 - 16739.316:   96.7525%  (       18)
00:10:39.205  16739.316 - 16844.594:   96.8980%  (       19)
00:10:39.205  16844.594 - 16949.873:   96.9210%  (        3)
00:10:39.205  16949.873 - 17055.152:   96.9439%  (        3)
00:10:39.205  17055.152 - 17160.431:   96.9669%  (        3)
00:10:39.205  17160.431 - 17265.709:   97.1124%  (       19)
00:10:39.205  17265.709 - 17370.988:   97.2733%  (       21)
00:10:39.205  17370.988 - 17476.267:   97.4188%  (       19)
00:10:39.205  17476.267 - 17581.545:   97.5107%  (       12)
00:10:39.205  17581.545 - 17686.824:   97.5797%  (        9)
00:10:39.205  17686.824 - 17792.103:   97.6409%  (        8)
00:10:39.205  17792.103 - 17897.382:   97.7175%  (       10)
00:10:39.205  17897.382 - 18002.660:   97.7788%  (        8)
00:10:39.205  18002.660 - 18107.939:   97.9243%  (       19)
00:10:39.205  18107.939 - 18213.218:   98.0928%  (       22)
00:10:39.205  18213.218 - 18318.496:   98.2996%  (       27)
00:10:39.205  18318.496 - 18423.775:   98.4145%  (       15)
00:10:39.205  18423.775 - 18529.054:   98.4758%  (        8)
00:10:39.205  18529.054 - 18634.333:   98.5064%  (        4)
00:10:39.205  18634.333 - 18739.611:   98.5294%  (        3)
00:10:39.205  18739.611 - 18844.890:   98.5754%  (        6)
00:10:39.205  18844.890 - 18950.169:   98.6520%  (       10)
00:10:39.205  18950.169 - 19055.447:   98.9124%  (       34)
00:10:39.205  19055.447 - 19160.726:   98.9354%  (        3)
00:10:39.205  19160.726 - 19266.005:   98.9583%  (        3)
00:10:39.205  19266.005 - 19371.284:   98.9813%  (        3)
00:10:39.205  19371.284 - 19476.562:   99.0196%  (        5)
00:10:39.205  19476.562 - 19581.841:   99.0656%  (        6)
00:10:39.205  19581.841 - 19687.120:   99.0962%  (        4)
00:10:39.205  19687.120 - 19792.398:   99.1268%  (        4)
00:10:39.205  19792.398 - 19897.677:   99.1651%  (        5)
00:10:39.205  19897.677 - 20002.956:   99.1958%  (        4)
00:10:39.205  20002.956 - 20108.235:   99.2264%  (        4)
00:10:39.205  20108.235 - 20213.513:   99.2570%  (        4)
00:10:39.205  20213.513 - 20318.792:   99.2953%  (        5)
00:10:39.205  20318.792 - 20424.071:   99.3260%  (        4)
00:10:39.205  20424.071 - 20529.349:   99.3566%  (        4)
00:10:39.205  20529.349 - 20634.628:   99.3873%  (        4)
00:10:39.205  20634.628 - 20739.907:   99.4179%  (        4)
00:10:39.205  20739.907 - 20845.186:   99.4485%  (        4)
00:10:39.205  20845.186 - 20950.464:   99.4792%  (        4)
00:10:39.206  20950.464 - 21055.743:   99.5098%  (        4)
00:10:39.206  27372.466 - 27583.023:   99.5711%  (        8)
00:10:39.206  27583.023 - 27793.581:   99.6324%  (        8)
00:10:39.206  27793.581 - 28004.138:   99.6860%  (        7)
00:10:39.206  28004.138 - 28214.696:   99.7472%  (        8)
00:10:39.206  28214.696 - 28425.253:   99.8162%  (        9)
00:10:39.206  28425.253 - 28635.810:   99.8698%  (        7)
00:10:39.206  28635.810 - 28846.368:   99.9387%  (        9)
00:10:39.206  28846.368 - 29056.925:  100.0000%  (        8)
00:10:39.206  
00:10:39.206   16:21:08 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:39.206  
00:10:39.206  real	0m2.660s
00:10:39.206  user	0m2.253s
00:10:39.206  sys	0m0.305s
00:10:39.206   16:21:08 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.206  ************************************
00:10:39.206  END TEST nvme_perf
00:10:39.206  ************************************
00:10:39.206   16:21:08 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
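The '[' -b /dev/ram0 ']' trace above is bash's file-type test: '[' is the test builtin, and -b succeeds only if the path exists and is a block special device, letting the perf script branch on whether a ram-backed block device is available. The same guard as a self-contained sketch:

    #!/usr/bin/env bash
    # Run a follow-up step only when /dev/ram0 exists and is a block device.
    if [ -b /dev/ram0 ]; then
        echo "/dev/ram0 is a block device, running the ram-disk variant"
    else
        echo "/dev/ram0 absent or not a block device, skipping" >&2
    fi
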
00:10:39.206   16:21:08 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:39.206   16:21:08 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:39.206   16:21:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.206   16:21:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:39.206  ************************************
00:10:39.206  START TEST nvme_hello_world
00:10:39.206  ************************************
00:10:39.206   16:21:08 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:39.465  Initializing NVMe Controllers
00:10:39.465  Attached to 0000:00:10.0
00:10:39.465    Namespace ID: 1 size: 6GB
00:10:39.465  Attached to 0000:00:11.0
00:10:39.465    Namespace ID: 1 size: 5GB
00:10:39.465  Attached to 0000:00:13.0
00:10:39.465    Namespace ID: 1 size: 1GB
00:10:39.465  Attached to 0000:00:12.0
00:10:39.465    Namespace ID: 1 size: 4GB
00:10:39.465    Namespace ID: 2 size: 4GB
00:10:39.465    Namespace ID: 3 size: 4GB
00:10:39.465  Initialization complete.
00:10:39.465  INFO: using host memory buffer for IO
00:10:39.465  Hello world!
00:10:39.465  INFO: using host memory buffer for IO
00:10:39.465  Hello world!
00:10:39.465  INFO: using host memory buffer for IO
00:10:39.465  Hello world!
00:10:39.465  INFO: using host memory buffer for IO
00:10:39.465  Hello world!
00:10:39.465  INFO: using host memory buffer for IO
00:10:39.465  Hello world!
00:10:39.465  INFO: using host memory buffer for IO
00:10:39.465  Hello world!
00:10:39.465  
00:10:39.465  real	0m0.308s
00:10:39.465  user	0m0.116s
00:10:39.465  sys	0m0.147s
00:10:39.465   16:21:08 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.465   16:21:08 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:10:39.465  ************************************
00:10:39.465  END TEST nvme_hello_world
00:10:39.465  ************************************
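Every test in this log is launched through the harness's run_test helper, which prints the START banner, times the command (producing the real/user/sys lines), and closes with the END banner. Below is a minimal sketch of that wrapper pattern; it is a simplification for illustration, not SPDK's actual run_test from autotest_common.sh:

    #!/usr/bin/env bash
    # Simplified run_test-style wrapper: opening banner, timed command, closing banner.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test_sketch nvme_hello_world ./build/examples/hello_world -i 0
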
00:10:39.724   16:21:08 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:39.724   16:21:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:39.724   16:21:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.724   16:21:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:39.724  ************************************
00:10:39.724  START TEST nvme_sgl
00:10:39.724  ************************************
00:10:39.724   16:21:08 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:39.983  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:10:39.983  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:10:39.983  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:10:39.983  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:10:39.983  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:10:39.983  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:10:39.983  0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:10:39.983  0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:10:39.983  0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:10:39.983  0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:10:39.983  0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:10:39.983  0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:10:39.983  0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:10:39.983  0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:10:39.983  NVMe Readv/Writev Request test
00:10:39.983  Attached to 0000:00:10.0
00:10:39.983  Attached to 0000:00:11.0
00:10:39.983  Attached to 0000:00:13.0
00:10:39.983  Attached to 0000:00:12.0
00:10:39.983  0000:00:10.0: build_io_request_2 test passed
00:10:39.983  0000:00:10.0: build_io_request_4 test passed
00:10:39.983  0000:00:10.0: build_io_request_5 test passed
00:10:39.983  0000:00:10.0: build_io_request_6 test passed
00:10:39.983  0000:00:10.0: build_io_request_7 test passed
00:10:39.983  0000:00:10.0: build_io_request_10 test passed
00:10:39.983  0000:00:11.0: build_io_request_2 test passed
00:10:39.983  0000:00:11.0: build_io_request_4 test passed
00:10:39.983  0000:00:11.0: build_io_request_5 test passed
00:10:39.983  0000:00:11.0: build_io_request_6 test passed
00:10:39.983  0000:00:11.0: build_io_request_7 test passed
00:10:39.983  0000:00:11.0: build_io_request_10 test passed
00:10:39.983  Cleaning up...
00:10:39.983  
00:10:39.983  real	0m0.362s
00:10:39.983  user	0m0.176s
00:10:39.983  sys	0m0.138s
00:10:39.983   16:21:09 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.983   16:21:09 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:10:39.983  ************************************
00:10:39.983  END TEST nvme_sgl
00:10:39.983  ************************************
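In the SGL output above, requests 0, 1, 3, 8, 9 and 11 are rejected with "Invalid IO length parameter" on 0000:00:10.0 and 0000:00:11.0 while 2, 4, 5, 6, 7 and 10 pass, and every request is rejected on 0000:00:12.0 and 0000:00:13.0, which suggests the outcome depends on each controller's SGL capabilities. One quick way to tally outcomes per controller from a saved copy of this output (the file name nvme_sgl.log is hypothetical):

    # Count passed vs rejected SGL requests per PCI address from a saved log.
    for addr in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        passed=$(grep -c "$addr: build_io_request_.* test passed" nvme_sgl.log)
        invalid=$(grep -c "$addr: build_io_request_.* Invalid IO length" nvme_sgl.log)
        echo "$addr: $passed passed, $invalid rejected"
    done
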
00:10:39.983   16:21:09 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:39.983   16:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:39.983   16:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.983   16:21:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:39.983  ************************************
00:10:39.983  START TEST nvme_e2edp
00:10:39.983  ************************************
00:10:39.983   16:21:09 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:40.242  NVMe Write/Read with End-to-End data protection test
00:10:40.242  Attached to 0000:00:10.0
00:10:40.242  Attached to 0000:00:11.0
00:10:40.242  Attached to 0000:00:13.0
00:10:40.242  Attached to 0000:00:12.0
00:10:40.242  Cleaning up...
00:10:40.242  
00:10:40.242  real	0m0.273s
00:10:40.242  user	0m0.094s
00:10:40.242  sys	0m0.138s
00:10:40.242   16:21:09 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.242   16:21:09 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:10:40.242  ************************************
00:10:40.242  END TEST nvme_e2edp
00:10:40.242  ************************************
00:10:40.501   16:21:09 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:40.501   16:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:40.501   16:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.501   16:21:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:40.501  ************************************
00:10:40.501  START TEST nvme_reserve
00:10:40.501  ************************************
00:10:40.501   16:21:09 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:40.760  =====================================================
00:10:40.760  NVMe Controller at PCI bus 0, device 16, function 0
00:10:40.760  =====================================================
00:10:40.760  Reservations:                Not Supported
00:10:40.760  =====================================================
00:10:40.760  NVMe Controller at PCI bus 0, device 17, function 0
00:10:40.760  =====================================================
00:10:40.760  Reservations:                Not Supported
00:10:40.760  =====================================================
00:10:40.760  NVMe Controller at PCI bus 0, device 19, function 0
00:10:40.760  =====================================================
00:10:40.760  Reservations:                Not Supported
00:10:40.760  =====================================================
00:10:40.760  NVMe Controller at PCI bus 0, device 18, function 0
00:10:40.760  =====================================================
00:10:40.760  Reservations:                Not Supported
00:10:40.760  Reservation test passed
00:10:40.760  
00:10:40.760  real	0m0.295s
00:10:40.760  user	0m0.095s
00:10:40.760  sys	0m0.153s
00:10:40.760   16:21:09 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.760   16:21:09 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:10:40.760  ************************************
00:10:40.760  END TEST nvme_reserve
00:10:40.760  ************************************
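All four emulated controllers report "Reservations: Not Supported", so the reservation test passes trivially without exercising register/acquire/release. On a device that does support reservations, the current reservation state can be inspected outside this harness with nvme-cli; the device path below is a hypothetical example:

    # Query the reservation status of a namespace with nvme-cli.
    sudo nvme resv-report /dev/nvme0n1
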
00:10:40.760   16:21:09 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:40.760   16:21:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:40.760   16:21:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.760   16:21:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:40.760  ************************************
00:10:40.760  START TEST nvme_err_injection
00:10:40.760  ************************************
00:10:40.760   16:21:09 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:41.019  NVMe Error Injection test
00:10:41.019  Attached to 0000:00:10.0
00:10:41.019  Attached to 0000:00:11.0
00:10:41.019  Attached to 0000:00:13.0
00:10:41.019  Attached to 0000:00:12.0
00:10:41.019  0000:00:10.0: get features failed as expected
00:10:41.019  0000:00:11.0: get features failed as expected
00:10:41.019  0000:00:13.0: get features failed as expected
00:10:41.019  0000:00:12.0: get features failed as expected
00:10:41.019  0000:00:10.0: get features successfully as expected
00:10:41.019  0000:00:11.0: get features successfully as expected
00:10:41.019  0000:00:13.0: get features successfully as expected
00:10:41.019  0000:00:12.0: get features successfully as expected
00:10:41.019  0000:00:11.0: read failed as expected
00:10:41.019  0000:00:13.0: read failed as expected
00:10:41.019  0000:00:12.0: read failed as expected
00:10:41.019  0000:00:10.0: read failed as expected
00:10:41.019  0000:00:11.0: read successfully as expected
00:10:41.019  0000:00:13.0: read successfully as expected
00:10:41.019  0000:00:12.0: read successfully as expected
00:10:41.019  0000:00:10.0: read successfully as expected
00:10:41.019  Cleaning up...
00:10:41.019  
00:10:41.019  real	0m0.311s
00:10:41.019  user	0m0.116s
00:10:41.019  sys	0m0.144s
00:10:41.019   16:21:10 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:41.019   16:21:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:10:41.019  ************************************
00:10:41.019  END TEST nvme_err_injection
00:10:41.019  ************************************
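The error-injection test follows a two-phase assertion: with an error injected, each "get features" admin command must fail ("failed as expected"), and after the injection is cleared the same command must succeed ("successfully as expected"). The assert-expected-failure half of that pattern in plain bash, with some_nvme_admin_cmd standing in as a placeholder for the real command:

    #!/usr/bin/env bash
    # Phase 1: the command must fail while error injection is active.
    if some_nvme_admin_cmd; then
        echo "FAIL: command succeeded while an error was injected" >&2
        exit 1
    fi
    echo "get features failed as expected"
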
00:10:41.019   16:21:10 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:41.019   16:21:10 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:10:41.019   16:21:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:41.019   16:21:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:41.019  ************************************
00:10:41.019  START TEST nvme_overhead
00:10:41.019  ************************************
00:10:41.019   16:21:10 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:42.397  Initializing NVMe Controllers
00:10:42.397  Attached to 0000:00:10.0
00:10:42.397  Attached to 0000:00:11.0
00:10:42.397  Attached to 0000:00:13.0
00:10:42.397  Attached to 0000:00:12.0
00:10:42.397  Initialization complete. Launching workers.
00:10:42.397  submit (in ns)   avg, min, max =  13537.4,  10792.8,  48717.3
00:10:42.397  complete (in ns) avg, min, max =   8474.4,   7849.8,  79370.3
00:10:42.397  
00:10:42.397  Submit histogram
00:10:42.397  ================
00:10:42.397         Range in us     Cumulative     Count
00:10:42.397     10.744 -    10.795:    0.0177%  (        1)
00:10:42.397     10.847 -    10.898:    0.0354%  (        1)
00:10:42.397     11.309 -    11.361:    0.0532%  (        1)
00:10:42.397     12.080 -    12.132:    0.0709%  (        1)
00:10:42.397     12.132 -    12.183:    0.0886%  (        1)
00:10:42.397     12.183 -    12.235:    0.1595%  (        4)
00:10:42.397     12.235 -    12.286:    0.1949%  (        2)
00:10:42.397     12.286 -    12.337:    0.3721%  (       10)
00:10:42.397     12.337 -    12.389:    0.5848%  (       12)
00:10:42.397     12.389 -    12.440:    1.1873%  (       34)
00:10:42.397     12.440 -    12.492:    1.9316%  (       42)
00:10:42.397     12.492 -    12.543:    3.1721%  (       70)
00:10:42.397     12.543 -    12.594:    4.7315%  (       88)
00:10:42.397     12.594 -    12.646:    6.6631%  (      109)
00:10:42.397     12.646 -    12.697:    8.6656%  (      113)
00:10:42.397     12.697 -    12.749:   10.8099%  (      121)
00:10:42.397     12.749 -    12.800:   12.5997%  (      101)
00:10:42.397     12.800 -    12.851:   14.5490%  (      110)
00:10:42.397     12.851 -    12.903:   16.3388%  (      101)
00:10:42.397     12.903 -    12.954:   18.4299%  (      118)
00:10:42.397     12.954 -    13.006:   20.5033%  (      117)
00:10:42.397     13.006 -    13.057:   23.7285%  (      182)
00:10:42.397     13.057 -    13.108:   28.1588%  (      250)
00:10:42.397     13.108 -    13.160:   34.0776%  (      334)
00:10:42.397     13.160 -    13.263:   48.1836%  (      796)
00:10:42.397     13.263 -    13.365:   59.9681%  (      665)
00:10:42.397     13.365 -    13.468:   70.1577%  (      575)
00:10:42.397     13.468 -    13.571:   78.2208%  (      455)
00:10:42.397     13.571 -    13.674:   84.8662%  (      375)
00:10:42.397     13.674 -    13.777:   88.8889%  (      227)
00:10:42.397     13.777 -    13.880:   91.5825%  (      152)
00:10:42.397     13.880 -    13.982:   93.0711%  (       84)
00:10:42.397     13.982 -    14.085:   93.9040%  (       47)
00:10:42.397     14.085 -    14.188:   94.2761%  (       21)
00:10:42.397     14.188 -    14.291:   94.5419%  (       15)
00:10:42.397     14.291 -    14.394:   94.6482%  (        6)
00:10:42.397     14.394 -    14.496:   94.7368%  (        5)
00:10:42.397     14.496 -    14.599:   94.7900%  (        3)
00:10:42.397     14.702 -    14.805:   94.8254%  (        2)
00:10:42.397     15.319 -    15.422:   94.8609%  (        2)
00:10:42.397     15.524 -    15.627:   94.8963%  (        2)
00:10:42.397     15.730 -    15.833:   94.9141%  (        1)
00:10:42.397     15.833 -    15.936:   94.9318%  (        1)
00:10:42.397     15.936 -    16.039:   94.9495%  (        1)
00:10:42.397     16.141 -    16.244:   94.9672%  (        1)
00:10:42.397     16.450 -    16.553:   94.9849%  (        1)
00:10:42.397     16.553 -    16.655:   95.0027%  (        1)
00:10:42.397     16.655 -    16.758:   95.0558%  (        3)
00:10:42.397     16.758 -    16.861:   95.0913%  (        2)
00:10:42.397     16.861 -    16.964:   95.1799%  (        5)
00:10:42.397     16.964 -    17.067:   95.2508%  (        4)
00:10:42.398     17.067 -    17.169:   95.3571%  (        6)
00:10:42.398     17.169 -    17.272:   95.5166%  (        9)
00:10:42.398     17.272 -    17.375:   95.6583%  (        8)
00:10:42.398     17.375 -    17.478:   95.8710%  (       12)
00:10:42.398     17.478 -    17.581:   96.1545%  (       16)
00:10:42.398     17.581 -    17.684:   96.4381%  (       16)
00:10:42.398     17.684 -    17.786:   96.6862%  (       14)
00:10:42.398     17.786 -    17.889:   96.8988%  (       12)
00:10:42.398     17.889 -    17.992:   97.0406%  (        8)
00:10:42.398     17.992 -    18.095:   97.2001%  (        9)
00:10:42.398     18.095 -    18.198:   97.3950%  (       11)
00:10:42.398     18.198 -    18.300:   97.5013%  (        6)
00:10:42.398     18.300 -    18.403:   97.5899%  (        5)
00:10:42.398     18.403 -    18.506:   97.7317%  (        8)
00:10:42.398     18.506 -    18.609:   97.7849%  (        3)
00:10:42.398     18.609 -    18.712:   97.9089%  (        7)
00:10:42.398     18.712 -    18.814:   98.1038%  (       11)
00:10:42.398     18.814 -    18.917:   98.2633%  (        9)
00:10:42.398     18.917 -    19.020:   98.4051%  (        8)
00:10:42.398     19.020 -    19.123:   98.5646%  (        9)
00:10:42.398     19.123 -    19.226:   98.6532%  (        5)
00:10:42.398     19.226 -    19.329:   98.7772%  (        7)
00:10:42.398     19.329 -    19.431:   98.8836%  (        6)
00:10:42.398     19.431 -    19.534:   98.9722%  (        5)
00:10:42.398     19.534 -    19.637:   99.0608%  (        5)
00:10:42.398     19.637 -    19.740:   99.0962%  (        2)
00:10:42.398     19.740 -    19.843:   99.2734%  (       10)
00:10:42.398     19.843 -    19.945:   99.3089%  (        2)
00:10:42.398     19.945 -    20.048:   99.3975%  (        5)
00:10:42.398     20.048 -    20.151:   99.4506%  (        3)
00:10:42.398     20.151 -    20.254:   99.5215%  (        4)
00:10:42.398     20.254 -    20.357:   99.5570%  (        2)
00:10:42.398     20.357 -    20.459:   99.5747%  (        1)
00:10:42.398     20.459 -    20.562:   99.5924%  (        1)
00:10:42.398     20.562 -    20.665:   99.6101%  (        1)
00:10:42.398     20.665 -    20.768:   99.6456%  (        2)
00:10:42.398     20.768 -    20.871:   99.6633%  (        1)
00:10:42.398     21.796 -    21.899:   99.6810%  (        1)
00:10:42.398     22.413 -    22.516:   99.7165%  (        2)
00:10:42.398     22.721 -    22.824:   99.7519%  (        2)
00:10:42.398     22.824 -    22.927:   99.7696%  (        1)
00:10:42.398     23.338 -    23.441:   99.7873%  (        1)
00:10:42.398     23.441 -    23.544:   99.8051%  (        1)
00:10:42.398     23.749 -    23.852:   99.8228%  (        1)
00:10:42.398     24.366 -    24.469:   99.8405%  (        1)
00:10:42.398     25.703 -    25.806:   99.8582%  (        1)
00:10:42.398     26.011 -    26.114:   99.8760%  (        1)
00:10:42.398     26.320 -    26.525:   99.8937%  (        1)
00:10:42.398     27.759 -    27.965:   99.9114%  (        1)
00:10:42.398     27.965 -    28.170:   99.9291%  (        1)
00:10:42.398     31.666 -    31.871:   99.9468%  (        1)
00:10:42.398     32.694 -    32.900:   99.9646%  (        1)
00:10:42.398     44.620 -    44.826:   99.9823%  (        1)
00:10:42.398     48.527 -    48.733:  100.0000%  (        1)
00:10:42.398  
00:10:42.398  Complete histogram
00:10:42.398  ==================
00:10:42.398         Range in us     Cumulative     Count
00:10:42.398      7.814 -     7.865:    0.0354%  (        2)
00:10:42.398      7.865 -     7.916:    1.0987%  (       60)
00:10:42.398      7.916 -     7.968:    8.3289%  (      408)
00:10:42.398      7.968 -     8.019:   24.1538%  (      893)
00:10:42.398      8.019 -     8.071:   36.0624%  (      672)
00:10:42.398      8.071 -     8.122:   46.4469%  (      586)
00:10:42.398      8.122 -     8.173:   57.0973%  (      601)
00:10:42.398      8.173 -     8.225:   63.3174%  (      351)
00:10:42.398      8.225 -     8.276:   67.5527%  (      239)
00:10:42.398      8.276 -     8.328:   69.5020%  (      110)
00:10:42.398      8.328 -     8.379:   70.6539%  (       65)
00:10:42.398      8.379 -     8.431:   71.2741%  (       35)
00:10:42.398      8.431 -     8.482:   71.7172%  (       25)
00:10:42.398      8.482 -     8.533:   72.0007%  (       16)
00:10:42.398      8.533 -     8.585:   72.3551%  (       20)
00:10:42.398      8.585 -     8.636:   73.4893%  (       64)
00:10:42.398      8.636 -     8.688:   75.5095%  (      114)
00:10:42.398      8.688 -     8.739:   76.5727%  (       60)
00:10:42.398      8.739 -     8.790:   77.8309%  (       71)
00:10:42.398      8.790 -     8.842:   80.4891%  (      150)
00:10:42.398      8.842 -     8.893:   83.6435%  (      178)
00:10:42.398      8.893 -     8.945:   85.6637%  (      114)
00:10:42.398      8.945 -     8.996:   88.1269%  (      139)
00:10:42.398      8.996 -     9.047:   90.5192%  (      135)
00:10:42.398      9.047 -     9.099:   92.1850%  (       94)
00:10:42.398      9.099 -     9.150:   93.6736%  (       84)
00:10:42.398      9.150 -     9.202:   95.1090%  (       81)
00:10:42.398      9.202 -     9.253:   96.0659%  (       54)
00:10:42.398      9.253 -     9.304:   96.6330%  (       32)
00:10:42.398      9.304 -     9.356:   97.0937%  (       26)
00:10:42.398      9.356 -     9.407:   97.2001%  (        6)
00:10:42.398      9.407 -     9.459:   97.4127%  (       12)
00:10:42.398      9.459 -     9.510:   97.4659%  (        3)
00:10:42.398      9.510 -     9.561:   97.5191%  (        3)
00:10:42.398      9.561 -     9.613:   97.5899%  (        4)
00:10:42.398      9.613 -     9.664:   97.6254%  (        2)
00:10:42.398      9.664 -     9.716:   97.7317%  (        6)
00:10:42.398      9.716 -     9.767:   97.7849%  (        3)
00:10:42.398      9.767 -     9.818:   97.8558%  (        4)
00:10:42.398      9.818 -     9.870:   97.8912%  (        2)
00:10:42.398      9.870 -     9.921:   97.9444%  (        3)
00:10:42.398      9.921 -     9.973:   98.0330%  (        5)
00:10:42.398     10.024 -    10.076:   98.0507%  (        1)
00:10:42.398     10.076 -    10.127:   98.0861%  (        2)
00:10:42.398     10.127 -    10.178:   98.1038%  (        1)
00:10:42.398     10.178 -    10.230:   98.1393%  (        2)
00:10:42.398     11.206 -    11.258:   98.1570%  (        1)
00:10:42.398     11.258 -    11.309:   98.1925%  (        2)
00:10:42.398     11.309 -    11.361:   98.2102%  (        1)
00:10:42.398     11.412 -    11.463:   98.2279%  (        1)
00:10:42.398     12.080 -    12.132:   98.2456%  (        1)
00:10:42.398     12.235 -    12.286:   98.2633%  (        1)
00:10:42.398     12.903 -    12.954:   98.2811%  (        1)
00:10:42.398     13.160 -    13.263:   98.3519%  (        4)
00:10:42.398     13.263 -    13.365:   98.3874%  (        2)
00:10:42.398     13.365 -    13.468:   98.4405%  (        3)
00:10:42.398     13.468 -    13.571:   98.5114%  (        4)
00:10:42.398     13.571 -    13.674:   98.5469%  (        2)
00:10:42.398     13.674 -    13.777:   98.6355%  (        5)
00:10:42.398     13.777 -    13.880:   98.7241%  (        5)
00:10:42.398     13.880 -    13.982:   98.8304%  (        6)
00:10:42.398     13.982 -    14.085:   98.8659%  (        2)
00:10:42.398     14.085 -    14.188:   98.9190%  (        3)
00:10:42.398     14.188 -    14.291:   99.0076%  (        5)
00:10:42.398     14.291 -    14.394:   99.1494%  (        8)
00:10:42.398     14.394 -    14.496:   99.2203%  (        4)
00:10:42.398     14.496 -    14.599:   99.2734%  (        3)
00:10:42.398     14.599 -    14.702:   99.3266%  (        3)
00:10:42.398     14.702 -    14.805:   99.3798%  (        3)
00:10:42.398     14.805 -    14.908:   99.4506%  (        4)
00:10:42.398     14.908 -    15.010:   99.5038%  (        3)
00:10:42.398     15.010 -    15.113:   99.5393%  (        2)
00:10:42.398     15.113 -    15.216:   99.5924%  (        3)
00:10:42.398     15.319 -    15.422:   99.6279%  (        2)
00:10:42.398     16.244 -    16.347:   99.6456%  (        1)
00:10:42.398     16.861 -    16.964:   99.6633%  (        1)
00:10:42.398     17.067 -    17.169:   99.6810%  (        1)
00:10:42.398     18.300 -    18.403:   99.7165%  (        2)
00:10:42.398     19.329 -    19.431:   99.7342%  (        1)
00:10:42.398     19.431 -    19.534:   99.7519%  (        1)
00:10:42.398     19.534 -    19.637:   99.7873%  (        2)
00:10:42.398     19.740 -    19.843:   99.8051%  (        1)
00:10:42.398     20.665 -    20.768:   99.8228%  (        1)
00:10:42.398     21.076 -    21.179:   99.8405%  (        1)
00:10:42.398     21.282 -    21.385:   99.8582%  (        1)
00:10:42.398     23.749 -    23.852:   99.8760%  (        1)
00:10:42.398     24.161 -    24.263:   99.8937%  (        1)
00:10:42.398     26.320 -    26.525:   99.9114%  (        1)
00:10:42.398     32.900 -    33.105:   99.9291%  (        1)
00:10:42.398     36.190 -    36.395:   99.9468%  (        1)
00:10:42.398     48.938 -    49.144:   99.9646%  (        1)
00:10:42.398     58.397 -    58.808:   99.9823%  (        1)
00:10:42.398     79.370 -    79.782:  100.0000%  (        1)
00:10:42.398  
00:10:42.398  
00:10:42.398  real	0m1.286s
00:10:42.398  user	0m1.095s
00:10:42.398  sys	0m0.143s
00:10:42.398   16:21:11 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:42.398   16:21:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:10:42.398  ************************************
00:10:42.398  END TEST nvme_overhead
00:10:42.398  ************************************
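The overhead summary lines report submit/complete latency in nanoseconds while the histogram buckets are labeled in microseconds, so the average submit latency of 13537.4 ns lands, as expected, inside the 13.468 - 13.571 us bucket range. Checking the unit conversion in bash:

    # Convert the reported average submit latency from ns to us.
    awk 'BEGIN { printf "%.4f us\n", 13537.4 / 1000 }'   # -> 13.5374 us
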
00:10:42.398   16:21:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:42.398   16:21:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:10:42.398   16:21:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:42.398   16:21:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:42.398  ************************************
00:10:42.398  START TEST nvme_arbitration
00:10:42.398  ************************************
00:10:42.398   16:21:11 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:46.594  Initializing NVMe Controllers
00:10:46.594  Attached to 0000:00:10.0
00:10:46.594  Attached to 0000:00:11.0
00:10:46.594  Attached to 0000:00:13.0
00:10:46.594  Attached to 0000:00:12.0
00:10:46.594  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:10:46.594  Associating QEMU NVMe Ctrl       (12341               ) with lcore 1
00:10:46.594  Associating QEMU NVMe Ctrl       (12343               ) with lcore 2
00:10:46.594  Associating QEMU NVMe Ctrl       (12342               ) with lcore 3
00:10:46.594  Associating QEMU NVMe Ctrl       (12342               ) with lcore 0
00:10:46.594  Associating QEMU NVMe Ctrl       (12342               ) with lcore 1
00:10:46.594  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:10:46.594  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:10:46.594  Initialization complete. Launching workers.
00:10:46.594  Starting thread on core 1 with urgent priority queue
00:10:46.594  Starting thread on core 2 with urgent priority queue
00:10:46.594  Starting thread on core 0 with urgent priority queue
00:10:46.594  Starting thread on core 3 with urgent priority queue
00:10:46.594  QEMU NVMe Ctrl       (12340               ) core 0:   448.00 IO/s   223.21 secs/100000 ios
00:10:46.594  QEMU NVMe Ctrl       (12342               ) core 0:   448.00 IO/s   223.21 secs/100000 ios
00:10:46.594  QEMU NVMe Ctrl       (12341               ) core 1:   448.00 IO/s   223.21 secs/100000 ios
00:10:46.594  QEMU NVMe Ctrl       (12342               ) core 1:   448.00 IO/s   223.21 secs/100000 ios
00:10:46.594  QEMU NVMe Ctrl       (12343               ) core 2:   938.67 IO/s   106.53 secs/100000 ios
00:10:46.594  QEMU NVMe Ctrl       (12342               ) core 3:   426.67 IO/s   234.38 secs/100000 ios
00:10:46.594  ========================================================
00:10:46.594  
00:10:46.594  
00:10:46.594  real	0m3.453s
00:10:46.594  user	0m9.451s
00:10:46.594  sys	0m0.164s
00:10:46.594   16:21:14 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:46.594   16:21:14 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:10:46.594  ************************************
00:10:46.594  END TEST nvme_arbitration
00:10:46.594  ************************************
00:10:46.594   16:21:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:10:46.594   16:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:46.594   16:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:46.594   16:21:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:46.594  ************************************
00:10:46.594  START TEST nvme_single_aen
00:10:46.594  ************************************
00:10:46.594   16:21:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:10:46.594  Asynchronous Event Request test
00:10:46.594  Attached to 0000:00:10.0
00:10:46.594  Attached to 0000:00:11.0
00:10:46.594  Attached to 0000:00:13.0
00:10:46.594  Attached to 0000:00:12.0
00:10:46.594  Reset controller to setup AER completions for this process
00:10:46.594  Registering asynchronous event callbacks...
00:10:46.594  Getting orig temperature thresholds of all controllers
00:10:46.594  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.594  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.594  0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.594  0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.594  Setting all controllers temperature threshold low to trigger AER
00:10:46.594  Waiting for all controllers temperature threshold to be set lower
00:10:46.594  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.595  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:10:46.595  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.595  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:10:46.595  0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.595  aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:10:46.595  0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.595  aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:10:46.595  Waiting for all controllers to trigger AER and reset threshold
00:10:46.595  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:46.595  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:46.595  0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:46.595  0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:46.595  Cleaning up...
00:10:46.595  
00:10:46.595  real	0m0.305s
00:10:46.595  user	0m0.109s
00:10:46.595  sys	0m0.146s
00:10:46.595   16:21:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:46.595   16:21:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:10:46.595  ************************************
00:10:46.595  END TEST nvme_single_aen
00:10:46.595  ************************************
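The aer binary above drives NVMe Get/Set Features for the temperature threshold (feature ID 0x04): read the original threshold (343 K), set it below the current composite temperature (323 K) to force an Asynchronous Event Notification, then restore it from aer_cb. Outside SPDK, roughly the same sequence can be sketched with nvme-cli (hypothetical device path /dev/nvme0; values in Kelvin, as in the log):

    # read the current temperature threshold (feature 0x04)
    nvme get-feature /dev/nvme0 -f 0x4
    # drop it below the composite temperature (323 K) to trigger the AER
    nvme set-feature /dev/nvme0 -f 0x4 -v 0x140   # 0x140 = 320 Kelvin
    # once the event fires, restore the original threshold
    nvme set-feature /dev/nvme0 -f 0x4 -v 0x157   # 0x157 = 343 Kelvin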
00:10:46.595   16:21:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:10:46.595   16:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:46.595   16:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:46.595   16:21:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:46.595  ************************************
00:10:46.595  START TEST nvme_doorbell_aers
00:10:46.595  ************************************
00:10:46.595   16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:10:46.595   16:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:10:46.595   16:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:10:46.595   16:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:10:46.595    16:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:10:46.595    16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:46.595    16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:10:46.595    16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:46.595     16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:46.595     16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:46.595    16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:46.595    16:21:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
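The @1498-@1504 trace above fully determines the enumeration helper: gen_nvme.sh emits a JSON bdev config and jq extracts each controller's PCI address. Reassembled from the trace (the early return on an empty list is inferred from the @1500 count check):

    get_nvme_bdfs() {
        local bdfs=()
        # gen_nvme.sh prints a JSON config; .params.traddr is the PCI address
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1
        printf '%s\n' "${bdfs[@]}"
    }

The nvme.sh@72-73 loop that follows then runs the doorbell_aers binary once per controller, capped at 10 seconds each, which is why the four Executing/Failure blocks below arrive roughly 10 seconds apart:

    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 "$testdir/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done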
00:10:46.595   16:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:46.595   16:21:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:10:46.854  [2024-12-09 16:21:15.840741] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:10:56.839  Executing: test_write_invalid_db
00:10:56.839  Waiting for AER completion...
00:10:56.839  Failure: test_write_invalid_db
00:10:56.839  
00:10:56.839  Executing: test_invalid_db_write_overflow_sq
00:10:56.839  Waiting for AER completion...
00:10:56.839  Failure: test_invalid_db_write_overflow_sq
00:10:56.839  
00:10:56.839  Executing: test_invalid_db_write_overflow_cq
00:10:56.839  Waiting for AER completion...
00:10:56.839  Failure: test_invalid_db_write_overflow_cq
00:10:56.839  
00:10:56.839   16:21:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:56.839   16:21:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:10:56.839  [2024-12-09 16:21:25.901866] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:06.819  Executing: test_write_invalid_db
00:11:06.819  Waiting for AER completion...
00:11:06.819  Failure: test_write_invalid_db
00:11:06.819  
00:11:06.819  Executing: test_invalid_db_write_overflow_sq
00:11:06.819  Waiting for AER completion...
00:11:06.819  Failure: test_invalid_db_write_overflow_sq
00:11:06.819  
00:11:06.819  Executing: test_invalid_db_write_overflow_cq
00:11:06.819  Waiting for AER completion...
00:11:06.819  Failure: test_invalid_db_write_overflow_cq
00:11:06.819  
00:11:06.819   16:21:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:06.819   16:21:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:06.819  [2024-12-09 16:21:35.947777] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:16.798  Executing: test_write_invalid_db
00:11:16.798  Waiting for AER completion...
00:11:16.798  Failure: test_write_invalid_db
00:11:16.798  
00:11:16.798  Executing: test_invalid_db_write_overflow_sq
00:11:16.798  Waiting for AER completion...
00:11:16.798  Failure: test_invalid_db_write_overflow_sq
00:11:16.798  
00:11:16.798  Executing: test_invalid_db_write_overflow_cq
00:11:16.798  Waiting for AER completion...
00:11:16.798  Failure: test_invalid_db_write_overflow_cq
00:11:16.798  
00:11:16.798   16:21:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:16.798   16:21:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:17.057  [2024-12-09 16:21:46.010890] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  Executing: test_write_invalid_db
00:11:27.042  Waiting for AER completion...
00:11:27.042  Failure: test_write_invalid_db
00:11:27.042  
00:11:27.042  Executing: test_invalid_db_write_overflow_sq
00:11:27.042  Waiting for AER completion...
00:11:27.042  Failure: test_invalid_db_write_overflow_sq
00:11:27.042  
00:11:27.042  Executing: test_invalid_db_write_overflow_cq
00:11:27.042  Waiting for AER completion...
00:11:27.042  Failure: test_invalid_db_write_overflow_cq
00:11:27.042  
00:11:27.042  
00:11:27.042  real	0m40.318s
00:11:27.042  user	0m28.309s
00:11:27.042  sys	0m11.646s
00:11:27.042   16:21:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.042   16:21:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:11:27.042  ************************************
00:11:27.042  END TEST nvme_doorbell_aers
00:11:27.042  ************************************
00:11:27.042    16:21:55 nvme -- nvme/nvme.sh@97 -- # uname
00:11:27.042   16:21:55 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:11:27.042   16:21:55 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:27.042   16:21:55 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:11:27.042   16:21:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.042   16:21:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:27.042  ************************************
00:11:27.042  START TEST nvme_multi_aen
00:11:27.042  ************************************
00:11:27.042   16:21:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:27.042  [2024-12-09 16:21:56.095418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.095511] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.095528] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.097415] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.097459] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.097474] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.098843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.098883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.098909] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.100217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.100254] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  [2024-12-09 16:21:56.100268] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65606) is not found. Dropping the request.
00:11:27.042  Child process pid: 66122
00:11:27.302  [Child] Asynchronous Event Request test
00:11:27.302  [Child] Attached to 0000:00:10.0
00:11:27.302  [Child] Attached to 0000:00:11.0
00:11:27.302  [Child] Attached to 0000:00:13.0
00:11:27.302  [Child] Attached to 0000:00:12.0
00:11:27.302  [Child] Registering asynchronous event callbacks...
00:11:27.302  [Child] Getting orig temperature thresholds of all controllers
00:11:27.302  [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  [Child] Waiting for all controllers to trigger AER and reset threshold
00:11:27.302  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.302  [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.302  [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.302  [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.302  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.302  [Child] 0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.302  [Child] 0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.302  [Child] 0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.302  [Child] Cleaning up...
00:11:27.302  Asynchronous Event Request test
00:11:27.302  Attached to 0000:00:10.0
00:11:27.302  Attached to 0000:00:11.0
00:11:27.302  Attached to 0000:00:13.0
00:11:27.302  Attached to 0000:00:12.0
00:11:27.302  Reset controller to setup AER completions for this process
00:11:27.302  Registering asynchronous event callbacks...
00:11:27.302  Getting orig temperature thresholds of all controllers
00:11:27.302  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.302  Setting all controllers temperature threshold low to trigger AER
00:11:27.302  Waiting for all controllers temperature threshold to be set lower
00:11:27.302  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.302  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:11:27.302  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.302  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:11:27.303  0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.303  aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:11:27.303  0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.303  aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:11:27.303  Waiting for all controllers to trigger AER and reset threshold
00:11:27.303  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.303  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.303  0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.303  0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:27.303  Cleaning up...
00:11:27.303  ************************************
00:11:27.303  END TEST nvme_multi_aen
00:11:27.303  ************************************
00:11:27.303  
00:11:27.303  real	0m0.612s
00:11:27.303  user	0m0.199s
00:11:27.303  sys	0m0.297s
00:11:27.303   16:21:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.303   16:21:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:11:27.562   16:21:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:27.562   16:21:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:27.562   16:21:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.562   16:21:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:27.562  ************************************
00:11:27.562  START TEST nvme_startup
00:11:27.562  ************************************
00:11:27.562   16:21:56 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:27.822  Initializing NVMe Controllers
00:11:27.822  Attached to 0000:00:10.0
00:11:27.822  Attached to 0000:00:11.0
00:11:27.822  Attached to 0000:00:13.0
00:11:27.822  Attached to 0000:00:12.0
00:11:27.822  Initialization complete.
00:11:27.822  Time used:190741.875      (us).
00:11:27.822  ************************************
00:11:27.822  END TEST nvme_startup
00:11:27.822  ************************************
00:11:27.822  
00:11:27.822  real	0m0.288s
00:11:27.822  user	0m0.105s
00:11:27.822  sys	0m0.139s
00:11:27.822   16:21:56 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.822   16:21:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:11:27.822   16:21:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:11:27.822   16:21:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:27.822   16:21:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.822   16:21:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:27.822  ************************************
00:11:27.822  START TEST nvme_multi_secondary
00:11:27.822  ************************************
00:11:27.822   16:21:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:11:27.822   16:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=66178
00:11:27.822   16:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:11:27.822   16:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=66179
00:11:27.822   16:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:11:27.822   16:21:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:32.014  Initializing NVMe Controllers
00:11:32.014  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:32.014  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:32.014  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:32.014  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:32.014  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:11:32.014  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:11:32.014  Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:11:32.014  Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:11:32.014  Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:11:32.014  Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:11:32.014  Initialization complete. Launching workers.
00:11:32.014  ========================================================
00:11:32.014                                                                             Latency(us)
00:11:32.014  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:32.014  PCIE (0000:00:10.0) NSID 1 from core  2:    2652.71      10.36    6029.13    1436.54   14712.85
00:11:32.014  PCIE (0000:00:11.0) NSID 1 from core  2:    2652.71      10.36    6029.86    1200.09   15981.69
00:11:32.014  PCIE (0000:00:13.0) NSID 1 from core  2:    2652.71      10.36    6029.55    1288.42   13331.10
00:11:32.014  PCIE (0000:00:12.0) NSID 1 from core  2:    2652.71      10.36    6027.42    1353.66   13043.71
00:11:32.014  PCIE (0000:00:12.0) NSID 2 from core  2:    2652.71      10.36    6021.73    1422.16   13374.92
00:11:32.014  PCIE (0000:00:12.0) NSID 3 from core  2:    2652.71      10.36    6022.00    1530.89   13272.00
00:11:32.014  ========================================================
00:11:32.014  Total                                  :   15916.29      62.17    6026.62    1200.09   15981.69
00:11:32.014  
00:11:32.014   16:22:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 66178
00:11:32.014  Initializing NVMe Controllers
00:11:32.014  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:32.014  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:32.014  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:32.014  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:32.014  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:11:32.014  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:11:32.014  Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:11:32.014  Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:11:32.014  Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:11:32.014  Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:11:32.014  Initialization complete. Launching workers.
00:11:32.014  ========================================================
00:11:32.014                                                                             Latency(us)
00:11:32.014  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:32.014  PCIE (0000:00:10.0) NSID 1 from core  1:    4987.28      19.48    3205.68    1545.32    8883.72
00:11:32.014  PCIE (0000:00:11.0) NSID 1 from core  1:    4987.28      19.48    3207.60    1574.77    9055.33
00:11:32.014  PCIE (0000:00:13.0) NSID 1 from core  1:    4987.28      19.48    3207.56    1539.68    9572.33
00:11:32.014  PCIE (0000:00:12.0) NSID 1 from core  1:    4987.28      19.48    3207.61    1525.48    7916.45
00:11:32.014  PCIE (0000:00:12.0) NSID 2 from core  1:    4987.28      19.48    3207.65    1403.59    9079.15
00:11:32.014  PCIE (0000:00:12.0) NSID 3 from core  1:    4987.28      19.48    3207.84    1595.34    8941.15
00:11:32.014  ========================================================
00:11:32.014  Total                                  :   29923.66     116.89    3207.32    1403.59    9572.33
00:11:32.014  
00:11:33.389  Initializing NVMe Controllers
00:11:33.389  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:33.389  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:33.389  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:33.389  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:33.389  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:33.389  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:33.389  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:33.389  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:33.389  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:33.389  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:33.389  Initialization complete. Launching workers.
00:11:33.389  ========================================================
00:11:33.389                                                                             Latency(us)
00:11:33.389  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:33.389  PCIE (0000:00:10.0) NSID 1 from core  0:    8447.48      33.00    1892.63     940.49    9387.02
00:11:33.389  PCIE (0000:00:11.0) NSID 1 from core  0:    8447.48      33.00    1893.59     956.89    9383.10
00:11:33.389  PCIE (0000:00:13.0) NSID 1 from core  0:    8447.48      33.00    1893.57     853.11    9712.59
00:11:33.389  PCIE (0000:00:12.0) NSID 1 from core  0:    8447.48      33.00    1893.55     810.20    9543.46
00:11:33.389  PCIE (0000:00:12.0) NSID 2 from core  0:    8447.48      33.00    1893.52     758.68    9635.95
00:11:33.389  PCIE (0000:00:12.0) NSID 3 from core  0:    8450.68      33.01    1892.77     696.84    9372.87
00:11:33.389  ========================================================
00:11:33.389  Total                                  :   50688.10     198.00    1893.27     696.84    9712.59
00:11:33.389  
00:11:33.389   16:22:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 66179
00:11:33.389   16:22:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66254
00:11:33.389   16:22:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:11:33.389   16:22:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66255
00:11:33.389   16:22:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:33.389   16:22:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:11:36.676  Initializing NVMe Controllers
00:11:36.676  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:36.676  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:36.676  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:36.676  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:36.676  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:11:36.676  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:11:36.676  Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:11:36.676  Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:11:36.676  Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:11:36.676  Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:11:36.676  Initialization complete. Launching workers.
00:11:36.676  ========================================================
00:11:36.676                                                                             Latency(us)
00:11:36.676  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:36.676  PCIE (0000:00:10.0) NSID 1 from core  1:    5177.30      20.22    3088.19    1038.89    6165.87
00:11:36.676  PCIE (0000:00:11.0) NSID 1 from core  1:    5177.30      20.22    3089.96    1053.54    6451.59
00:11:36.676  PCIE (0000:00:13.0) NSID 1 from core  1:    5177.30      20.22    3090.31    1060.26    5798.69
00:11:36.676  PCIE (0000:00:12.0) NSID 1 from core  1:    5177.30      20.22    3090.39    1071.73    5611.63
00:11:36.676  PCIE (0000:00:12.0) NSID 2 from core  1:    5177.30      20.22    3090.50    1065.07    5472.49
00:11:36.676  PCIE (0000:00:12.0) NSID 3 from core  1:    5182.63      20.24    3087.38    1034.66    5844.08
00:11:36.676  ========================================================
00:11:36.676  Total                                  :   31069.12     121.36    3089.46    1034.66    6451.59
00:11:36.676  
00:11:36.676  Initializing NVMe Controllers
00:11:36.676  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:36.676  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:36.676  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:36.676  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:36.676  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:36.676  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:36.676  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:36.676  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:36.676  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:36.676  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:36.676  Initialization complete. Launching workers.
00:11:36.676  ========================================================
00:11:36.676                                                                             Latency(us)
00:11:36.676  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:36.676  PCIE (0000:00:10.0) NSID 1 from core  0:    5039.13      19.68    3172.73    1061.40   13077.57
00:11:36.676  PCIE (0000:00:11.0) NSID 1 from core  0:    5039.13      19.68    3174.37    1064.36   11442.78
00:11:36.676  PCIE (0000:00:13.0) NSID 1 from core  0:    5039.13      19.68    3174.31     950.92   11453.71
00:11:36.676  PCIE (0000:00:12.0) NSID 1 from core  0:    5039.13      19.68    3174.26     941.76   11803.50
00:11:36.676  PCIE (0000:00:12.0) NSID 2 from core  0:    5039.13      19.68    3174.20     898.18   12645.04
00:11:36.676  PCIE (0000:00:12.0) NSID 3 from core  0:    5039.13      19.68    3174.14     887.96   12901.33
00:11:36.676  ========================================================
00:11:36.676  Total                                  :   30234.80     118.10    3174.00     887.96   13077.57
00:11:36.676  
00:11:39.213  Initializing NVMe Controllers
00:11:39.213  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:39.213  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:39.213  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:39.213  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:39.213  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:11:39.213  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:11:39.213  Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:11:39.213  Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:11:39.213  Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:11:39.213  Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:11:39.213  Initialization complete. Launching workers.
00:11:39.213  ========================================================
00:11:39.213                                                                             Latency(us)
00:11:39.213  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:39.213  PCIE (0000:00:10.0) NSID 1 from core  2:    2680.50      10.47    5966.57    1094.21   13397.78
00:11:39.213  PCIE (0000:00:11.0) NSID 1 from core  2:    2680.50      10.47    5968.30    1140.55   15184.84
00:11:39.213  PCIE (0000:00:13.0) NSID 1 from core  2:    2680.50      10.47    5971.54    1094.30   12794.77
00:11:39.213  PCIE (0000:00:12.0) NSID 1 from core  2:    2680.50      10.47    5972.71    1154.05   13110.04
00:11:39.213  PCIE (0000:00:12.0) NSID 2 from core  2:    2680.50      10.47    5973.23    1163.07   13461.24
00:11:39.213  PCIE (0000:00:12.0) NSID 3 from core  2:    2680.50      10.47    5973.19    1170.94   13503.93
00:11:39.213  ========================================================
00:11:39.213  Total                                  :   16083.02      62.82    5970.92    1094.21   15184.84
00:11:39.213  
00:11:39.213  ************************************
00:11:39.213  END TEST nvme_multi_secondary
00:11:39.213  ************************************
00:11:39.213   16:22:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66254
00:11:39.213   16:22:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66255
00:11:39.213  
00:11:39.213  real	0m10.996s
00:11:39.213  user	0m18.494s
00:11:39.213  sys	0m1.063s
00:11:39.213   16:22:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.213   16:22:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
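nvme_multi_secondary launches three spdk_nvme_perf instances against the same controllers at once, one per core mask, sharing DPDK state through shared-memory id 0 (-i 0); the backgrounded instances are collected with wait, which is why three result tables interleave above. Reconstructed from the nvme.sh@51-@66 trace (the second pass at @60-@66 swaps which instance runs for 5 seconds; the backgrounding is inferred from the pid0/pid1 assignments):

    # pass 1: 5 s reader on core 0 (0x1), two 3 s secondaries on cores 1-2
    "$SPDK_BIN_DIR/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    "$SPDK_BIN_DIR/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    "$SPDK_BIN_DIR/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
    wait "$pid0"
    wait "$pid1"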
00:11:39.213   16:22:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:11:39.213   16:22:07 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:11:39.213   16:22:07 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/65192 ]]
00:11:39.213   16:22:07 nvme -- common/autotest_common.sh@1094 -- # kill 65192
00:11:39.213   16:22:07 nvme -- common/autotest_common.sh@1095 -- # wait 65192
00:11:39.213  [2024-12-09 16:22:07.946414] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.946812] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.946923] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.946976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.953110] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.953182] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.953211] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.953240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.957541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.957610] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.957638] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.957667] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.961838] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.961928] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.961957] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213  [2024-12-09 16:22:07.961986] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66121) is not found. Dropping the request.
00:11:39.213   16:22:08 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:11:39.213   16:22:08 nvme -- common/autotest_common.sh@1101 -- # echo 2
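kill_stub (traced at @1093-@1101 above) tears down the stub SPDK app that held the controllers between tests: check the PID is still alive via /proc, kill and reap it, then remove its ready-file. A sketch consistent with the trace; the trailing `echo 2` at @1101 is left out because its redirection target is not visible here:

    kill_stub() {
        # $stubpid (65192 above) was recorded when the stub app started
        if [[ -e /proc/$stubpid ]]; then
            kill "$stubpid"
            wait "$stubpid"    # @1095: reap it so the PID is not reused mid-test
        fi
        rm -f /var/run/spdk_stub0
    }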
00:11:39.213   16:22:08 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:11:39.213   16:22:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:39.213   16:22:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.213   16:22:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:39.213  ************************************
00:11:39.213  START TEST bdev_nvme_reset_stuck_adm_cmd
00:11:39.213  ************************************
00:11:39.213   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:11:39.213  * Looking for test storage...
00:11:39.213  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:39.213     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:39.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.213  		--rc genhtml_branch_coverage=1
00:11:39.213  		--rc genhtml_function_coverage=1
00:11:39.213  		--rc genhtml_legend=1
00:11:39.213  		--rc geninfo_all_blocks=1
00:11:39.213  		--rc geninfo_unexecuted_blocks=1
00:11:39.213  		
00:11:39.213  		'
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:39.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.213  		--rc genhtml_branch_coverage=1
00:11:39.213  		--rc genhtml_function_coverage=1
00:11:39.213  		--rc genhtml_legend=1
00:11:39.213  		--rc geninfo_all_blocks=1
00:11:39.213  		--rc geninfo_unexecuted_blocks=1
00:11:39.213  		
00:11:39.213  		'
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:39.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.213  		--rc genhtml_branch_coverage=1
00:11:39.213  		--rc genhtml_function_coverage=1
00:11:39.213  		--rc genhtml_legend=1
00:11:39.213  		--rc geninfo_all_blocks=1
00:11:39.213  		--rc geninfo_unexecuted_blocks=1
00:11:39.213  		
00:11:39.213  		'
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:39.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:39.213  		--rc genhtml_branch_coverage=1
00:11:39.213  		--rc genhtml_function_coverage=1
00:11:39.213  		--rc genhtml_legend=1
00:11:39.213  		--rc geninfo_all_blocks=1
00:11:39.213  		--rc geninfo_unexecuted_blocks=1
00:11:39.213  		
00:11:39.213  		'
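The long scripts/common.sh trace above (@333-@368) is a component-wise version comparison: the installed lcov version is split on '.', '-' and ':' and checked against 1.15 so the right --rc coverage options can be exported. A behaviorally equivalent check, far shorter than the traced implementation (a sketch, not SPDK's code):

    lt() {
        # true when $1 sorts strictly before $2 as a version string
        [[ $1 != "$2" ]] &&
            [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
    }

    lt 1.15 2 && echo "lcov is older than 2.x: use the legacy --rc spellings"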
00:11:39.213   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:11:39.213   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:11:39.213   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:11:39.213   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:11:39.213   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:11:39.213    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:11:39.473     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:11:39.473     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:39.473     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:11:39.473     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:39.473      16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:39.473      16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:39.473     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:11:39.473     16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:11:39.473    16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
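get_first_nvme_bdf (@1509-@1512 above) reuses the same enumeration and returns only the first address, which is why this test pins nvme0 to 0000:00:10.0. Roughly, as traced:

    get_first_nvme_bdf() {
        local bdfs=()
        bdfs=($(get_nvme_bdfs))    # 10.0, 11.0, 12.0, 13.0 on this rig
        echo "${bdfs[0]}"          # @1512: 0000:00:10.0
    }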
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66421
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66421
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66421 ']'
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:39.473  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:39.473   16:22:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:39.473  [2024-12-09 16:22:08.613715] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:11:39.473  [2024-12-09 16:22:08.613846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66421 ]
00:11:39.732  [2024-12-09 16:22:08.809942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:39.991  [2024-12-09 16:22:08.922171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:39.991  [2024-12-09 16:22:08.922350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:39.991  [2024-12-09 16:22:08.922574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:39.991  [2024-12-09 16:22:08.922945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
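waitforlisten (@835-@868) blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock; the four reactor lines show its cores coming up while the wait is in flight. The shape, assuming readiness is probed through rpc.py (an assumption; the real helper in autotest_common.sh also enforces max_retries=100 and the PID liveness check traced above):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((i-- > 0)); do
            # hypothetical probe: any RPC answer means the target is listening
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            sleep 0.5
        done
        return 1
    }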
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:40.928  nvme0n1
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.928    16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_MDHTc.txt
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:40.928  true
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.928    16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733761329
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66444
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:40.928   16:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:42.831  [2024-12-09 16:22:11.864588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:11:42.831  [2024-12-09 16:22:11.865311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:11:42.831  [2024-12-09 16:22:11.865361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:11:42.831  [2024-12-09 16:22:11.865378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:42.831  [2024-12-09 16:22:11.867461] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:11:42.831  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66444
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66444
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66444
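Condensed, the @40-@60 sequence above proves that a controller reset completes a stuck admin command: inject a one-shot error that parks the next Get Features (opc 0x0a) for up to 15 s, submit that command in the background, reset after 2 s, and confirm the wait returns early with the injected status. Written here with rpc.py directly, where the script uses its rpc_cmd wrapper; the base64 payload is the 64-byte Get Features command from @50 above:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # park a Get Features (NUMBER OF QUEUES) admin command for up to 15 s
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <blob from @50> & get_feat_pid=$!
    sleep 2
    rpc.py bdev_nvme_reset_controller nvme0    # the reset completes it manually
    wait "$get_feat_pid"                       # returns ~2 s in, not after 15 s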
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_MDHTc.txt
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:11:42.831     16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:11:42.831     16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:11:42.831      16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:11:42.831    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:11:42.831   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:11:42.832     16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:11:42.832     16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:11:42.832      16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:11:42.832   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
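base64_decode_bits (traced twice above, at @11-@15) turns the base64-encoded 16-byte completion saved by bdev_nvme_send_cmd into hex bytes and extracts a bit field: for AAAAAAAAAAAAAAAAAAACAA== the status word is 0x0002, so bits [1:8] give SC=0x1 and bits [9:11] give SCT=0x0, matching the injected error. A sketch consistent with the trace (how status is assembled from bin_array is inferred from the NVMe completion layout, bytes 14-15 of the CQE; the trace only shows status=2):

    base64_decode_bits() {
        local bin_array status
        # decode the base64 cpl blob into one hex byte per array slot
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # status field lives in CQE bytes 14-15 (bit 0 = phase tag)
        status=$((bin_array[15] << 8 | bin_array[14]))
        # $2 = right shift, $3 = mask: (2 >> 1) & 255 -> SC, (2 >> 9) & 3 -> SCT
        printf '0x%x' $(((status >> $2) & $3))
    }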
00:11:42.832   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_MDHTc.txt
00:11:42.832   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66421
00:11:42.832   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66421 ']'
00:11:42.832   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66421
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:11:42.832   16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:42.832    16:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66421
00:11:43.091   16:22:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:43.091   16:22:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:43.091   16:22:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66421'
00:11:43.091  killing process with pid 66421
00:11:43.091   16:22:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66421
00:11:43.091   16:22:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66421
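
killprocess then tears down the target app (pid 66421, SPDK's reactor_0). A condensed sketch of the path the trace takes through common/autotest_common.sh; the exact error handling and the sudo branch are simplified here:

    killprocess() {
        [[ -n $1 ]] || return 1                       # @954: refuse empty pids
        kill -0 "$1" || return 0                      # @958: still alive?
        [[ $(uname) == Linux ]] &&                    # @959-@960
            process_name=$(ps --no-headers -o comm= "$1")
        # @964: a 'sudo' wrapper would need its child killed instead; the
        # target here is reactor_0, so the plain path is taken
        echo "killing process with pid $1"            # @972
        kill "$1"                                     # @973
        wait "$1"                                     # @978
    }
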
00:11:45.690   16:22:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:11:45.690   16:22:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:11:45.690  
00:11:45.690  real	0m6.208s
00:11:45.690  user	0m21.519s
00:11:45.690  sys	0m0.831s
00:11:45.690   16:22:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:45.690  ************************************
00:11:45.690  END TEST bdev_nvme_reset_stuck_adm_cmd
00:11:45.690   16:22:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:45.690  ************************************
00:11:45.690   16:22:14 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:11:45.690   16:22:14 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:11:45.690   16:22:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:45.690   16:22:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:45.690   16:22:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:45.690  ************************************
00:11:45.690  START TEST nvme_fio
00:11:45.690  ************************************
00:11:45.690   16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:11:45.690    16:22:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:11:45.690    16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:45.690    16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:11:45.690    16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:45.690     16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:45.690     16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:45.690    16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:11:45.690    16:22:14 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
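
Every step of get_nvme_bdfs is visible in the trace at @1498-@1504; reassembled (the return-on-empty behavior is approximated):

    get_nvme_bdfs() {
        local bdfs=()
        # gen_nvme.sh emits an SPDK bdev config; jq pulls out the PCIe
        # addresses (four QEMU controllers on this VM)
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1   # @1500: fail on an empty scan
        printf '%s\n' "${bdfs[@]}"         # @1504
    }
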
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:45.690   16:22:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:11:46.259   16:22:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
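
The @35/@38/@41 steps just traced repeat once per controller below: probe for an active namespace, check whether it uses Extended Data LBA formats (which would need metadata-aware sizing), then pick the block size before handing off to fio. A condensed reading of that loop, not the literal nvme.sh source; bs=4096 applies because no extended-LBA format is in use here:

    for bdf in "${bdfs[@]}"; do
        # @35: skip controllers without an active namespace
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" |
            grep -qE '^Namespace ID:[0-9]+' || continue
        # @38-@41: no 'Extended Data LBA' in the identify output => bs=4096
        bs=4096
        # fio filenames cannot contain ':', hence the dotted traddr form
        fio_nvme "$PLUGIN_DIR/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
    done
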
00:11:46.259   16:22:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:46.259    16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:46.259    16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:46.259    16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:46.259   16:22:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
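
That LD_PRELOAD line is the point of the whole fio_plugin preamble: the SPDK fio plugin was built with ASan, and the sanitizer runtime has to be loaded ahead of the plugin or ASan aborts when fio dlopens it. Reassembled from the @1341-@1356 trace:

    fio_plugin() {
        local fio_dir=/usr/src/fio
        local sanitizers=('libasan' 'libclang_rt.asan')
        local plugin=$1 asan_lib=
        shift
        for sanitizer in "${sanitizers[@]}"; do
            # @1349: the third ldd column is the resolved library path
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break   # @1350-@1351: found libasan.so.8
        done
        # preload the sanitizer runtime ahead of the plugin itself
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }
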
00:11:46.259  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:46.259  fio-3.35
00:11:46.259  Starting 1 thread
00:11:50.458  
00:11:50.458  test: (groupid=0, jobs=1): err= 0: pid=66592: Mon Dec  9 16:22:18 2024
00:11:50.458    read: IOPS=22.4k, BW=87.3MiB/s (91.6MB/s)(175MiB/2001msec)
00:11:50.458      slat (usec): min=4, max=105, avg= 4.71, stdev= 1.20
00:11:50.458      clat (usec): min=166, max=11558, avg=2851.28, stdev=314.62
00:11:50.458       lat (usec): min=171, max=11659, avg=2855.99, stdev=315.15
00:11:50.458      clat percentiles (usec):
00:11:50.458       |  1.00th=[ 2573],  5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737],
00:11:50.458       | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2802], 60.00th=[ 2835],
00:11:50.458       | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064],
00:11:50.458       | 99.00th=[ 3982], 99.50th=[ 5014], 99.90th=[ 6194], 99.95th=[ 8455],
00:11:50.458       | 99.99th=[11207]
00:11:50.458     bw (  KiB/s): min=84552, max=90448, per=98.86%, avg=88384.00, stdev=3321.91, samples=3
00:11:50.458     iops        : min=21138, max=22612, avg=22095.33, stdev=829.95, samples=3
00:11:50.458    write: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(173MiB/2001msec); 0 zone resets
00:11:50.458      slat (nsec): min=4239, max=34347, avg=5063.99, stdev=962.05
00:11:50.458      clat (usec): min=237, max=11372, avg=2866.80, stdev=321.43
00:11:50.458       lat (usec): min=243, max=11397, avg=2871.86, stdev=321.92
00:11:50.458      clat percentiles (usec):
00:11:50.458       |  1.00th=[ 2573],  5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737],
00:11:50.458       | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868],
00:11:50.458       | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097],
00:11:50.458       | 99.00th=[ 4047], 99.50th=[ 5014], 99.90th=[ 6456], 99.95th=[ 8848],
00:11:50.458       | 99.99th=[10945]
00:11:50.458     bw (  KiB/s): min=84456, max=90872, per=99.69%, avg=88509.33, stdev=3526.33, samples=3
00:11:50.458     iops        : min=21114, max=22718, avg=22127.33, stdev=881.58, samples=3
00:11:50.458    lat (usec)   : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01%
00:11:50.458    lat (msec)   : 2=0.05%, 4=98.89%, 10=0.99%, 20=0.03%
00:11:50.458    cpu          : usr=99.30%, sys=0.15%, ctx=4, majf=0, minf=608
00:11:50.458    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:50.458       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:50.458       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:50.458       issued rwts: total=44725,44415,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:50.458       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:50.458  
00:11:50.458  Run status group 0 (all jobs):
00:11:50.458     READ: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=175MiB (183MB), run=2001-2001msec
00:11:50.458    WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=173MiB (182MB), run=2001-2001msec
00:11:50.458  -----------------------------------------------------
00:11:50.458  Suppressions used:
00:11:50.458    count      bytes template
00:11:50.458        1         32 /usr/src/fio/parse.c
00:11:50.458        1          8 libtcmalloc_minimal.so
00:11:50.458  -----------------------------------------------------
00:11:50.458  
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:50.458   16:22:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:50.458    16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:50.458    16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:50.458    16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:50.458   16:22:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:50.721  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:50.721  fio-3.35
00:11:50.721  Starting 1 thread
00:11:54.914  
00:11:54.914  test: (groupid=0, jobs=1): err= 0: pid=66663: Mon Dec  9 16:22:23 2024
00:11:54.914    read: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec)
00:11:54.914      slat (nsec): min=3745, max=74876, avg=4331.82, stdev=932.26
00:11:54.914      clat (usec): min=193, max=10953, avg=2734.94, stdev=333.95
00:11:54.914       lat (usec): min=197, max=11024, avg=2739.27, stdev=334.25
00:11:54.914      clat percentiles (usec):
00:11:54.914       |  1.00th=[ 2245],  5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540],
00:11:54.914       | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2769],
00:11:54.914       | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032],
00:11:54.914       | 99.00th=[ 3949], 99.50th=[ 4490], 99.90th=[ 5800], 99.95th=[ 8094],
00:11:54.914       | 99.99th=[10683]
00:11:54.914     bw (  KiB/s): min=84752, max=98032, per=97.53%, avg=90992.00, stdev=6676.05, samples=3
00:11:54.914     iops        : min=21188, max=24508, avg=22748.00, stdev=1669.01, samples=3
00:11:54.914    write: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(181MiB/2001msec); 0 zone resets
00:11:54.914      slat (nsec): min=3845, max=29695, avg=4612.82, stdev=877.26
00:11:54.914      clat (usec): min=200, max=10680, avg=2746.01, stdev=341.46
00:11:54.914       lat (usec): min=204, max=10702, avg=2750.62, stdev=341.76
00:11:54.914      clat percentiles (usec):
00:11:54.914       |  1.00th=[ 2278],  5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540],
00:11:54.914       | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802],
00:11:54.914       | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032],
00:11:54.914       | 99.00th=[ 3982], 99.50th=[ 4490], 99.90th=[ 6325], 99.95th=[ 8455],
00:11:54.914       | 99.99th=[10421]
00:11:54.914     bw (  KiB/s): min=84592, max=98928, per=98.40%, avg=91178.67, stdev=7238.37, samples=3
00:11:54.914     iops        : min=21148, max=24732, avg=22794.67, stdev=1809.59, samples=3
00:11:54.914    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01%
00:11:54.914    lat (msec)   : 2=0.33%, 4=98.68%, 10=0.93%, 20=0.02%
00:11:54.914    cpu          : usr=99.45%, sys=0.00%, ctx=5, majf=0, minf=608
00:11:54.914    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:54.914       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:54.914       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:54.914       issued rwts: total=46670,46353,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:54.914       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:54.914  
00:11:54.914  Run status group 0 (all jobs):
00:11:54.914     READ: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec
00:11:54.914    WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=181MiB (190MB), run=2001-2001msec
00:11:54.914  -----------------------------------------------------
00:11:54.914  Suppressions used:
00:11:54.914    count      bytes template
00:11:54.914        1         32 /usr/src/fio/parse.c
00:11:54.914        1          8 libtcmalloc_minimal.so
00:11:54.914  -----------------------------------------------------
00:11:54.914  
00:11:54.914   16:22:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:54.914   16:22:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:54.914   16:22:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:54.914   16:22:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:54.914   16:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:54.914   16:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:55.483   16:22:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:55.483   16:22:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:55.483    16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:55.483    16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:55.483    16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:55.483   16:22:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:55.483  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:55.483  fio-3.35
00:11:55.483  Starting 1 thread
00:11:59.678  
00:11:59.678  test: (groupid=0, jobs=1): err= 0: pid=66729: Mon Dec  9 16:22:28 2024
00:11:59.678    read: IOPS=23.9k, BW=93.5MiB/s (98.0MB/s)(187MiB/2001msec)
00:11:59.678      slat (nsec): min=3752, max=76791, avg=4312.01, stdev=1158.85
00:11:59.678      clat (usec): min=995, max=10907, avg=2663.87, stdev=363.61
00:11:59.678       lat (usec): min=1000, max=10984, avg=2668.18, stdev=363.99
00:11:59.678      clat percentiles (usec):
00:11:59.678       |  1.00th=[ 2180],  5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2474],
00:11:59.678       | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671],
00:11:59.678       | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 3195],
00:11:59.678       | 99.00th=[ 3884], 99.50th=[ 4359], 99.90th=[ 5997], 99.95th=[ 8029],
00:11:59.678       | 99.99th=[10683]
00:11:59.678     bw (  KiB/s): min=90472, max=99448, per=98.66%, avg=94464.00, stdev=4569.48, samples=3
00:11:59.678     iops        : min=22618, max=24862, avg=23616.00, stdev=1142.37, samples=3
00:11:59.678    write: IOPS=23.8k, BW=92.9MiB/s (97.4MB/s)(186MiB/2001msec); 0 zone resets
00:11:59.678      slat (nsec): min=3836, max=44743, avg=4547.78, stdev=1137.54
00:11:59.678      clat (usec): min=1033, max=10808, avg=2675.70, stdev=371.77
00:11:59.678       lat (usec): min=1037, max=10829, avg=2680.24, stdev=372.14
00:11:59.678      clat percentiles (usec):
00:11:59.678       |  1.00th=[ 2212],  5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2474],
00:11:59.678       | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671],
00:11:59.678       | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 3195],
00:11:59.678       | 99.00th=[ 3949], 99.50th=[ 4490], 99.90th=[ 6325], 99.95th=[ 8356],
00:11:59.678       | 99.99th=[10421]
00:11:59.678     bw (  KiB/s): min=89368, max=100752, per=99.26%, avg=94448.00, stdev=5789.86, samples=3
00:11:59.678     iops        : min=22342, max=25188, avg=23612.00, stdev=1447.47, samples=3
00:11:59.678    lat (usec)   : 1000=0.01%
00:11:59.678    lat (msec)   : 2=0.26%, 4=98.88%, 10=0.84%, 20=0.02%
00:11:59.678    cpu          : usr=99.45%, sys=0.05%, ctx=7, majf=0, minf=609
00:11:59.678    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:59.678       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:59.678       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:59.678       issued rwts: total=47896,47598,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:59.678       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:59.678  
00:11:59.678  Run status group 0 (all jobs):
00:11:59.678     READ: bw=93.5MiB/s (98.0MB/s), 93.5MiB/s-93.5MiB/s (98.0MB/s-98.0MB/s), io=187MiB (196MB), run=2001-2001msec
00:11:59.678    WRITE: bw=92.9MiB/s (97.4MB/s), 92.9MiB/s-92.9MiB/s (97.4MB/s-97.4MB/s), io=186MiB (195MB), run=2001-2001msec
00:11:59.938  -----------------------------------------------------
00:11:59.938  Suppressions used:
00:11:59.938    count      bytes template
00:11:59.938        1         32 /usr/src/fio/parse.c
00:11:59.938        1          8 libtcmalloc_minimal.so
00:11:59.938  -----------------------------------------------------
00:11:59.938  
00:11:59.938   16:22:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:59.938   16:22:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:59.938   16:22:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:59.938   16:22:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:00.197   16:22:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:12:00.197   16:22:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:00.457   16:22:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:12:00.457   16:22:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:00.457    16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:00.457    16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:00.457    16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:00.457   16:22:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:00.716  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:00.716  fio-3.35
00:12:00.716  Starting 1 thread
00:12:05.992  
00:12:05.992  test: (groupid=0, jobs=1): err= 0: pid=66795: Mon Dec  9 16:22:35 2024
00:12:05.992    read: IOPS=22.9k, BW=89.3MiB/s (93.7MB/s)(179MiB/2001msec)
00:12:05.992      slat (nsec): min=3667, max=82092, avg=4418.16, stdev=1242.76
00:12:05.992      clat (usec): min=214, max=11371, avg=2791.18, stdev=537.42
00:12:05.992       lat (usec): min=218, max=11453, avg=2795.60, stdev=538.07
00:12:05.992      clat percentiles (usec):
00:12:05.992       |  1.00th=[ 2212],  5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2540],
00:12:05.992       | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802],
00:12:05.992       | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3163],
00:12:05.992       | 99.00th=[ 5407], 99.50th=[ 6456], 99.90th=[ 8455], 99.95th=[ 9110],
00:12:05.992       | 99.99th=[10945]
00:12:05.992     bw (  KiB/s): min=81968, max=99024, per=96.83%, avg=88570.67, stdev=9156.83, samples=3
00:12:05.992     iops        : min=20492, max=24756, avg=22142.67, stdev=2289.21, samples=3
00:12:05.992    write: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec); 0 zone resets
00:12:05.992      slat (usec): min=3, max=176, avg= 4.66, stdev= 1.56
00:12:05.992      clat (usec): min=228, max=11216, avg=2798.02, stdev=541.14
00:12:05.992       lat (usec): min=233, max=11249, avg=2802.68, stdev=541.79
00:12:05.992      clat percentiles (usec):
00:12:05.992       |  1.00th=[ 2245],  5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2540],
00:12:05.992       | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802],
00:12:05.992       | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3163],
00:12:05.992       | 99.00th=[ 5407], 99.50th=[ 6652], 99.90th=[ 8455], 99.95th=[ 8979],
00:12:05.992       | 99.99th=[10814]
00:12:05.992     bw (  KiB/s): min=81672, max=99488, per=97.63%, avg=88754.67, stdev=9452.41, samples=3
00:12:05.992     iops        : min=20418, max=24872, avg=22188.67, stdev=2363.10, samples=3
00:12:05.992    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:12:05.992    lat (msec)   : 2=0.29%, 4=97.24%, 10=2.40%, 20=0.03%
00:12:05.992    cpu          : usr=98.85%, sys=0.25%, ctx=25, majf=0, minf=606
00:12:05.992    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:05.992       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:05.992       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:05.992       issued rwts: total=45757,45478,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:05.992       latency   : target=0, window=0, percentile=100.00%, depth=128
00:12:05.992  
00:12:05.992  Run status group 0 (all jobs):
00:12:05.992     READ: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=179MiB (187MB), run=2001-2001msec
00:12:05.992    WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec
00:12:06.251  -----------------------------------------------------
00:12:06.251  Suppressions used:
00:12:06.251    count      bytes template
00:12:06.251        1         32 /usr/src/fio/parse.c
00:12:06.251        1          8 libtcmalloc_minimal.so
00:12:06.251  -----------------------------------------------------
00:12:06.251  
00:12:06.251   16:22:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:06.251   16:22:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:12:06.251  
00:12:06.251  real	0m20.904s
00:12:06.251  user	0m15.116s
00:12:06.251  sys	0m7.825s
00:12:06.251   16:22:35 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:06.251   16:22:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:06.251  ************************************
00:12:06.251  END TEST nvme_fio
00:12:06.251  ************************************
00:12:06.251  ************************************
00:12:06.251  END TEST nvme
00:12:06.251  ************************************
00:12:06.251  
00:12:06.251  real	1m35.845s
00:12:06.251  user	3m42.126s
00:12:06.251  sys	0m27.329s
00:12:06.251   16:22:35 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:06.251   16:22:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:06.519   16:22:35  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:12:06.519   16:22:35  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:12:06.519   16:22:35  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:06.519   16:22:35  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:06.519   16:22:35  -- common/autotest_common.sh@10 -- # set +x
00:12:06.519  ************************************
00:12:06.519  START TEST nvme_scc
00:12:06.519  ************************************
00:12:06.519   16:22:35 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:12:06.519  * Looking for test storage...
00:12:06.519  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:06.519     16:22:35 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:06.519      16:22:35 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version
00:12:06.519      16:22:35 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:06.778     16:22:35 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@345 -- # : 1
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:06.779     16:22:35 nvme_scc -- scripts/common.sh@368 -- # return 0
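
The @333-@368 walk above is scripts/common.sh deciding that lcov 1.15 sorts before 2, which selects the newer coverage flags exported next. The traced comparison, condensed; the decimal() sanitization is elided and missing components default to 0 here:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local ver1 ver1_l ver2 ver2_l v op=$2
        IFS=.-: read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}   # @336: (1 15)
        IFS=.-: read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}   # @337: (2)
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal versions satisfy ==, <= and >=
    }
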
00:12:06.779     16:22:35 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:06.779     16:22:35 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:06.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.779  		--rc genhtml_branch_coverage=1
00:12:06.779  		--rc genhtml_function_coverage=1
00:12:06.779  		--rc genhtml_legend=1
00:12:06.779  		--rc geninfo_all_blocks=1
00:12:06.779  		--rc geninfo_unexecuted_blocks=1
00:12:06.779  		
00:12:06.779  		'
00:12:06.779     16:22:35 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:06.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.779  		--rc genhtml_branch_coverage=1
00:12:06.779  		--rc genhtml_function_coverage=1
00:12:06.779  		--rc genhtml_legend=1
00:12:06.779  		--rc geninfo_all_blocks=1
00:12:06.779  		--rc geninfo_unexecuted_blocks=1
00:12:06.779  		
00:12:06.779  		'
00:12:06.779     16:22:35 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:06.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.779  		--rc genhtml_branch_coverage=1
00:12:06.779  		--rc genhtml_function_coverage=1
00:12:06.779  		--rc genhtml_legend=1
00:12:06.779  		--rc geninfo_all_blocks=1
00:12:06.779  		--rc geninfo_unexecuted_blocks=1
00:12:06.779  		
00:12:06.779  		'
00:12:06.779     16:22:35 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:06.779  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.779  		--rc genhtml_branch_coverage=1
00:12:06.779  		--rc genhtml_function_coverage=1
00:12:06.779  		--rc genhtml_legend=1
00:12:06.779  		--rc geninfo_all_blocks=1
00:12:06.779  		--rc geninfo_unexecuted_blocks=1
00:12:06.779  		
00:12:06.779  		'
00:12:06.779    16:22:35 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:06.779       16:22:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:06.779      16:22:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:06.779      16:22:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:06.779       16:22:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.779       16:22:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.779       16:22:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.779       16:22:35 nvme_scc -- paths/export.sh@5 -- # export PATH
00:12:06.779       16:22:35 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:12:06.779     16:22:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:12:06.779    16:22:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:06.779    16:22:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:12:06.779   16:22:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:12:06.779   16:22:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:12:06.779   16:22:35 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:07.348  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:07.607  Waiting for block devices as requested
00:12:07.607  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:07.866  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:07.866  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:07.866  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:13.146  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
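
setup.sh reset is rebinding the four QEMU NVMe functions (1b36 0010) from the userspace uio_pci_generic driver back to the kernel nvme driver, so the /dev/nvme* nodes reappear for the nvme-cli scan that follows. At the sysfs level a rebind like '0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme' amounts to roughly the following; this is illustrative only, setup.sh wraps it in waiting and safety checks:

    bdf=0000:00:11.0
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/unbind
    echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
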
00:12:13.146   16:22:42 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:12:13.146   16:22:42 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:13.146   16:22:42 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:12:13.146   16:22:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:13.146   16:22:42 nvme_scc -- scripts/common.sh@27 -- # return 0
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:12:13.146    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:12:13.146    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:12:13.146    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:12:13.146   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:12:13.147    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:12:13.147   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:12:13.148    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:13.148   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
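The id-ctrl pass above is driven by the nvme_get helper whose xtrace lines (functions.sh@16 through @23) repeat throughout: it runs an nvme-cli subcommand, splits each output line at the first ':' into a register name and a value, and stores the pair in a global associative array named after the device. A minimal reconstruction from the trace alone follows; the exact whitespace trimming is an assumption, not the verbatim SPDK source.

    # Hedged reconstruction of nvme_get from the functions.sh@16-23 trace lines;
    # key/value whitespace handling is assumed, not copied from SPDK.
    nvme_get() {
        local ref=$1 reg val      # @17: name of the target array, e.g. nvme0
        shift                     # @18: remaining args are the nvme-cli command
        local -gA "$ref=()"       # @20: declare the global associative array
        while IFS=: read -r reg val; do       # @21: split "field : value"
            [[ -n $val ]] || continue         # @22: skip lines without a value
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""   # @23: store it
        done < <("$@")            # @16: e.g. /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
    }

One side effect of splitting on ':' is visible just above: nvme-cli wraps the ps0 power-state description onto a continuation line, and since that line carries its own colons it parses as a separate key, which is how nvme0[rwt] ends up holding '0 rwl:0 idle_power:- active_power:-'.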
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:13.149   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:12:13.149    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.150   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:12:13.150    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
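In the lbaf0 through lbaf7 entries just parsed, lbads is the base-2 logarithm of the LBA data size, so lbads:9 means 512-byte and lbads:12 means 4096-byte blocks, and flbas=0x4 selects the format marked '(in use)', lbaf4 with no metadata. A quick sanity check of the sizes this implies:

    # lbads is log2(LBA data size); nsze counts blocks of that size.
    echo $((1 << 9)) $((1 << 12))   # 512 4096
    echo $((0x140000 * 4096))       # 5368709120 bytes, i.e. a 5 GiB namespace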
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
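Two parameter expansions do the heavy lifting in this namespace loop. At @54, ${ctrl##*nvme} reduces /sys/class/nvme/nvme0 to '0', so the extglob alternation matches both the generic (ng0n1) and block (nvme0n1) namespace nodes under the controller directory; at @58, ${ns##*n} strips everything up to the last 'n', leaving just the namespace index. Both are easy to verify:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # Expands to .../ng0n1 .../nvme0n1 where those sysfs nodes exist:
    echo "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
    ns=ng0n1;   echo "${ns##*n}"    # 1
    ns=nvme0n1; echo "${ns##*n}"    # 1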
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:12:13.151    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.151   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.152    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.152   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.153    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.153    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.153    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.153    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.153    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
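[Note: the functions.sh@47-@63 statements traced above enumerate each controller and register it in a set of global maps. A minimal sketch reconstructed from the trace, not the verbatim SPDK source; how @49 derives the PCI address is an assumption:]

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                        # @48: guard against an unmatched glob
      pci=$(basename "$(readlink -f "$ctrl/device")")   # @49: assumed derivation of e.g. 0000:00:11.0
      pci_can_use "$pci" || continue                    # @50: honor PCI block/allow lists
      ctrl_dev=${ctrl##*/}                              # @51: e.g. nvme0
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52: fill the assoc array named ${ctrl_dev}
      ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61: name of this controller's namespace map
      bdfs["$ctrl_dev"]=$pci                            # @62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63: indexed by controller number
    done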
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:13.153   16:22:42 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:13.153   16:22:42 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:13.153   16:22:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:13.153   16:22:42 nvme_scc -- scripts/common.sh@27 -- # return 0
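[Note: the scripts/common.sh@18-@27 lines above trace a block/allow-list check for 0000:00:10.0. A sketch under the assumption that the empty expansions at @21 and @25 come from PCI_BLOCKED and PCI_ALLOWED (names inferred; both are unset in this run):]

    pci_can_use() {
      local i
      [[ " $PCI_BLOCKED " =~ " $1 " ]] && return 1   # @21: explicitly blocked devices lose
      [[ -z $PCI_ALLOWED ]] && return 0              # @25: no allow-list, any device is usable
      for i in $PCI_ALLOWED; do                      # otherwise require an exact match
        [[ $i == "$1" ]] && return 0
      done
      return 1
    }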
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.153    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:13.153   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:12:13.417    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.417   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:12:13.418    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.418   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:12:13.419    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
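[Note: the id-ctrl parse for nvme1 ends here, the final read at @21 fails on EOF. For reference, the loop that produced all of the nvme1[...] assignments above, reconstructed from the functions.sh@16-@23 tags: each "field : value" row of nvme-cli output is split on the first ':' and eval'd into a global associative array named after the device. A sketch; whitespace trimming of keys and values is approximated from the trace:]

    nvme_get() {
      local ref=$1 reg val
      shift                                     # @18: the rest is the nvme-cli command line
      local -gA "$ref=()"                       # @20: e.g. declare -gA nvme1=()
      while IFS=: read -r reg val; do           # @21: split rows like "vid : 0x1b36"
        [[ -n $val ]] || continue               # @22: skip the header line, which has no value
        eval "${ref}[${reg// /}]=\"${val# }\""  # @23: e.g. nvme1[vid]="0x1b36", nvme1[lbaf1]="ms:8 ..."
      done < <("$@")                            # @16: e.g. /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    }

[Rows whose value itself contains colons, such as the ps0 power-state line, get split at their first colon too, which is why the "rwt:0 rwl:0 ..." tail lands in a separate nvme1[rwt] key above.]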
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:13.419   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.420   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"'
00:12:13.420    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
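[Note: the functions.sh@53-@58 lines above show the namespace pass: an extglob matches both the character-device (ng1n1) and block-device (nvme1n1) entries under the controller, each is parsed with nvme_get, and the resulting array name is recorded in the controller's namespace map keyed by namespace number. A sketch; the wrapper name is hypothetical and the ${ctrl_dev}_ns map is assumed to be declared by the caller:]

    shopt -s extglob nullglob
    collect_ctrl_namespaces() {   # hypothetical name for this fragment
      local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev
      local -n _ctrl_ns=${ctrl_dev}_ns                                 # @53: nameref to e.g. nvme1_ns
      for ns in "$ctrl/"@("ng${ctrl_dev#nvme}"|"${ctrl_dev}n")*; do    # @54: ng1* and nvme1n* entries
        [[ -e $ns ]] || continue                                       # @55
        ns_dev=${ns##*/}                                               # @56: e.g. ng1n1 or nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                        # @57: parse id-ns fields
        _ctrl_ns[${ns##*n}]=$ns_dev                                    # @58: key "1" is the namespace index
      done
    }

[Both ng1n1 and nvme1n1 reduce to the same key "1", so the block-device entry parsed next simply overwrites the character-device one in the map, as the second @54-@58 pass below shows.]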
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:12:13.421    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.421   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.422    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.422   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
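With nvme1n1 parsed, the "(in use)" marker on lbaf7 is consistent with the flbas value captured earlier: bits 3:0 of FLBAS index the active LBA format, and flbas=0x7 selects format 7 (lbads:12, i.e. 2^12 = 4096-byte data blocks, plus 64 bytes of metadata). A one-liner to recover the in-use format from the array the trace just built (illustrative; assumes the nvme1n1 map above is populated):

fmt=$((${nvme1n1[flbas]} & 0xf))               # FLBAS bits 3:0 = active LBA format
echo "in use: lbaf$fmt = ${nvme1n1[lbaf$fmt]}" # -> lbaf7 = ms:64  lbads:12 rp:0 (in use)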
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
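Lines @60–@63 are the per-controller bookkeeping once all of nvme1's namespaces are in: the controller name is recorded in ctrls, nvmes maps it to the name of its namespace array (resolved later via `local -n`), bdfs keeps its PCI address, and ordered_ctrls is a sparse indexed array keyed by instance number. Condensed into one loop, with the sysfs BDF lookup being an assumption on my part (the trace obtains pci earlier, at @49):

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	ctrl_dev=${ctrl##*/}                              # e.g. nvme1
	pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup via sysfs
	ctrls["$ctrl_dev"]=$ctrl_dev
	nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its namespace map
	bdfs["$ctrl_dev"]=$pci                            # e.g. 0000:00:10.0
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done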
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:12:13.423   16:22:42 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:13.423   16:22:42 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:12:13.423   16:22:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:13.423   16:22:42 nvme_scc -- scripts/common.sh@27 -- # return 0
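Before touching nvme2, the scanner asks scripts/common.sh whether its PCI address may be used at all. In this run both filter lists are empty, so the =~ probe against the allow list sees an empty left-hand side and the -z test on the block list passes, hence `return 0`. A hedged reconstruction of that control flow (not the exact source; treating PCI_ALLOWED/PCI_BLOCKED as space-separated BDF lists is an assumption):

pci_can_use_sketch() { # $1 = BDF, e.g. 0000:00:12.0
	local i
	[[ " ${PCI_ALLOWED:-} " =~ " $1 " ]] && return 0  # explicitly allowed
	[[ -z ${PCI_BLOCKED:-} ]] && return 0             # no block list: usable
	for i in $PCI_BLOCKED; do
		[[ $i == "$1" ]] && return 1                    # explicitly blocked
	done
	return 0
}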
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12342                ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342               "'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342               '
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl                          "'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl                          '
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0   "'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0   '
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:13.423   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:12:13.423    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:12:13.424    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:12:13.424   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12342 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.425   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:12:13.425    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:12:13.691   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.691   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.691   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
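One quirk worth noting in the id-ctrl parse that just finished: nvme-cli prints power state 0 across three lines, and because the reader splits each line at its first colon only, the descriptor lands in the map as three entries with odd keys, ps0, rwt and active_power_workload, whose values still contain colons. Consumers that only look up the well-known scalar fields never see these. The split is easy to reproduce with the same read loop, fed the three lines verbatim:

while IFS=: read -r reg val; do
	printf 'reg=%-24s val=%s\n' "${reg//[[:space:]]/}" "${val# }"
done <<'EOF'
ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
          rwt:0 rwl:0 idle_power:- active_power:-
          active_power_workload:-
EOF

This prints reg=ps0, reg=rwt and reg=active_power_workload with exactly the values seen in the trace above.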
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:12:13.692    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.692   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
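
The ng2n1 block above is one complete pass of nvme_get: per the sourced line tags, @17 declares the locals, @18 shifts the array name off the argument list, @20 creates a global associative array named after the device, and @21-@23 read the nvme id-ns output line by line, splitting each line on the first colon and eval-ing the pair into the array. The first read hits the banner line of the id-ns output, which has nothing after the colon, hence the one [[ -n '' ]] miss at @22. A minimal bash sketch that would reproduce this trace; the while/process-substitution framing and the inline whitespace trimming (parameter expansions leave no xtrace lines of their own) are assumptions inferred from the logged values, not confirmed source:

    nvme_get() {                                  # e.g. nvme_get ng2n1 id-ns /dev/ng2n1
        local ref=$1 reg val                      # functions.sh@17
        shift                                     # functions.sh@18
        local -gA "$ref=()"                       # functions.sh@20: global array named after the device
        while IFS=: read -r reg val; do           # functions.sh@21: split "reg : val" on the first ':'
            # @22 gates on a non-empty value, skipping the id-ns banner line;
            # trimming is assumed: "lbaf  4" becomes key "lbaf4", one leading space
            # is stripped from the value, trailing/internal padding is kept.
            [[ -n $val ]] && eval "${ref}[${reg// /}]=\"${val# }\""   # functions.sh@22/@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16 shows this exact command
    }

Once populated, the map reads like any bash associative array, e.g. "${ng2n1[nsze]}" is 0x100000. Note only the leading space of each value is stripped: the lbaf entries keep their internal padding, which is why ng2n1[lbaf0] is stored as 'ms:0   lbads:9  rp:0 ' with the trailing blank intact.
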
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.693   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:12:13.693    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.694    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.694   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:12:13.695    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.695   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
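
With ng2n3 recorded, the @54 loop header shows how the namespace devices are enumerated in the first place: an extglob over /sys/class/nvme/nvme2 that matches both the ng2nY character devices and the nvme2nY block devices, with @53 binding a nameref so each hit can be registered in the per-controller nvme2_ns map. A sketch of that outer loop as implied by tags @53-@58 (the [[ -e ]]-guard-with-continue and the ns_dev=${ns##*/} derivation are inferred from the @55/@56 values, and extglob support is assumed to be enabled elsewhere in functions.sh):

    local -n _ctrl_ns=${ctrl##*/}_ns                  # functions.sh@53: nameref to nvme2_ns
    # "$ctrl" is /sys/class/nvme/nvme2, so the pattern expands to @(ng2|nvme2n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
        [[ -e $ns ]] || continue                      # functions.sh@55: skip an unmatched glob
        ns_dev=${ns##*/}                              # functions.sh@56: e.g. ng2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # functions.sh@57: parse into ${ns_dev}[...]
        _ctrl_ns[${ns##*n}]=$ns_dev                   # functions.sh@58: keyed by the NSID digit
    done

Because ${ns##*n} strips everything through the last "n", ng2n1 and nvme2n1 both map to key 1; the glob sorts the ng* entries first, so the nvme2n1 pass that starts below would overwrite the ng2n1 registration if it completes the same way.
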
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:12:13.696    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:12:13.696   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:12:13.697    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.697   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
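Note how the lbaf rows keep their embedded colons: `read -r reg val` consumes only the first ':' and leaves the remainder in val, so a line like `lbaf  0 : ms:0   lbads:9  rp:0` becomes key lbaf0 with the ms/lbads/rp sub-fields intact. A one-line demonstration of that split:

    IFS=: read -r reg val <<< 'lbaf  0 : ms:0   lbads:9  rp:0 '
    echo "${reg// /} -> ${val## }"    # prints: lbaf0 -> ms:0   lbads:9  rp:0

With flbas=0x4 selecting lbaf4 (lbads:12, i.e. 2^12 = 4096-byte blocks, marked "in use" above) and nsze=0x100000 blocks, this namespace works out to 1,048,576 * 4096 B = 4 GiB.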
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
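With nvme2n1 parsed and filed under index 1 (functions.sh@58 uses `${ns##*n}`, which strips everything through the last 'n'), the @54 loop moves on. Its extglob pattern matches both the block nodes (nvme2nY) and the generic character nodes (ng2nY) under the controller's sysfs directory; for ctrl=/sys/class/nvme/nvme2 it expands as sketched here:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    # the @54 pattern becomes: /sys/class/nvme/nvme2/@(ng2|nvme2n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "ns index ${ns##*n}: ${ns##*/}"   # @58 key, e.g. "ns index 1: nvme2n1"
    done

The identical nvme_get pass then repeats for nvme2n2 and nvme2n3 below.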
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"'
00:12:13.698    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.698   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.699   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"'
00:12:13.699    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:13.700    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:13.700   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
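All three namespaces of this controller are now captured, so functions.sh@60-63 files nvme2 into the global lookup tables keyed by device name. Restated compactly (array names as in the trace; nvme2_ns is presumably the per-controller copy of _ctrl_ns):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=nvme2                 # @60: name of the id-ctrl array built earlier
    nvmes["$ctrl_dev"]=nvme2_ns              # @61: name of the namespace map for nvme2
    bdfs["$ctrl_dev"]=0000:00:12.0           # @62: PCI BDF backing this controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # @63: slot 2 keeps controllers in index order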
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:12:13.701   16:22:42 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:13.701   16:22:42 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:12:13.701   16:22:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:13.701   16:22:42 nvme_scc -- scripts/common.sh@27 -- # return 0
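Before nvme3 is parsed, pci_can_use (scripts/common.sh@18-27) vets its PCI address. The trace shows an empty block-list regex test at @21 (the `[[    =~  0000:00:13.0  ]]` line), an empty allow-list check at @25, and the fall-through `return 0` at @27. A sketch consistent with that flow; the PCI_BLOCKED/PCI_ALLOWED variable names are inferred, not confirmed by this log:

    pci_can_use() {
        local i                                       # common.sh@18
        [[ " $PCI_BLOCKED " =~ \ $1\  ]] && return 1  # @21: reject block-listed BDFs
        [[ -z $PCI_ALLOWED ]] && return 0             # @25: empty allow list accepts all
        for i in $PCI_ALLOWED; do                     # otherwise require an explicit entry
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

Both lists are empty in this run, so 0000:00:13.0 is accepted and nvme3 is enumerated next.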
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12343                ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343               "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343               '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl                          "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl                          '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0   "'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0   '
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x2 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"'
00:12:13.701    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x88010 ]]
00:12:13.701   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"'
00:12:13.702    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:12:13.702   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:fdp-subsys3 ]]
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"'
00:12:13.703    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.703   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:12:13.704   16:22:42 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
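
The long trace above is bash's plain `IFS=: read -r reg val` loop: nvme/functions.sh feeds `nvme id-ctrl /dev/nvme3` through it and evals each non-empty `reg : val` pair into a global associative array named after the controller. A minimal stand-alone sketch of the same pattern (the `parse_id_ctrl` helper is illustrative, not a function in nvme/functions.sh):

    parse_id_ctrl() {
        local dev=$1 ref=$2 reg val
        local -n _ctrl=$ref                     # nameref to a caller-declared assoc array
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue          # skips the banner line, as traced
            reg=${reg// /}                                # id-ctrl pads register names with spaces
            _ctrl[$reg]=${val#"${val%%[![:space:]]*}"}    # trim leading whitespace only
        done < <(nvme id-ctrl "$dev")
    }

    declare -A nvme3=()
    parse_id_ctrl /dev/nvme3 nvme3
    echo "${nvme3[oncs]}"                       # 0x15d on the QEMU controllers above

Note the trace keeps trailing padding in string fields such as `sn` and `mn` (e.g. `'12343               '`); the leading-only trim above preserves that behavior.
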
00:12:13.704    16:22:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:12:13.704    16:22:42 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:13.704      16:22:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:12:13.704     16:22:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:13.964      16:22:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:13.964     16:22:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:12:13.964    16:22:42 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:12:13.964    16:22:42 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:12:13.964    16:22:42 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:12:13.964   16:22:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:13.964   16:22:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
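
The selection just traced reduces to one bitmask test: ONCS (Optional NVM Command Support) bit 8 advertises the Copy command, which is what the scc feature check keys on (0x15d & 0x100 is non-zero for all four controllers, and the first hit, nvme1, wins). A hedged stand-alone version of that check (`ctrl_supports_scc` is illustrative):

    ctrl_supports_scc() {
        local dev=$1 oncs
        oncs=$(nvme id-ctrl "$dev" | awk -F: '$1 ~ /^oncs/ {gsub(/ /, "", $2); print $2}')
        (( oncs & 1 << 8 ))                     # bit 8 of ONCS: Copy command support
    }

    ctrl_supports_scc /dev/nvme1 && echo "nvme1 supports simple copy"
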
00:12:13.964   16:22:42 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:14.533  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:15.471  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:15.471  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:15.471  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:15.471  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:15.471   16:22:44 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:15.471   16:22:44 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:15.471   16:22:44 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:15.471   16:22:44 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:15.471  ************************************
00:12:15.472  START TEST nvme_simple_copy
00:12:15.472  ************************************
00:12:15.472   16:22:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:15.731  Initializing NVMe Controllers
00:12:15.731  Attaching to 0000:00:10.0
00:12:15.731  Controller supports SCC. Attached to 0000:00:10.0
00:12:15.731    Namespace ID: 1 size: 6GB
00:12:15.731  Initialization complete.
00:12:15.731  
00:12:15.731  Controller QEMU NVMe Ctrl       (12340               )
00:12:15.731  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:15.731  Namespace Block Size:4096
00:12:15.731  Writing LBAs 0 to 63 with Random Data
00:12:15.731  Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:15.731  LBAs matching Written Data: 64
00:12:15.990  
00:12:15.990  real	0m0.308s
00:12:15.990  user	0m0.116s
00:12:15.991  sys	0m0.090s
00:12:15.991  ************************************
00:12:15.991  END TEST nvme_simple_copy
00:12:15.991  ************************************
00:12:15.991   16:22:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:15.991   16:22:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:15.991  ************************************
00:12:15.991  END TEST nvme_scc
00:12:15.991  ************************************
00:12:15.991  
00:12:15.991  real	0m9.478s
00:12:15.991  user	0m1.808s
00:12:15.991  sys	0m2.645s
00:12:15.991   16:22:44 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:15.991   16:22:44 nvme_scc -- common/autotest_common.sh@10 -- # set +x
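
For comparison, what nvme_simple_copy just exercised through SPDK can be approximated from the shell against a kernel-attached controller, assuming an nvme-cli build recent enough to ship the `copy` subcommand; flag semantics vary between releases, and the /tmp paths are hypothetical, so treat this as a sketch:

    # Copy LBAs 0-63 to destination LBA 256, then read the destination back.
    # Copy-range NLB fields are zero-based, so --blocks=63 describes 64 blocks
    # (verify against the man page of your nvme-cli build); block size 4096
    # matches the namespace block size reported above.
    nvme copy /dev/nvme0n1 --slbs=0 --blocks=63 --sdlba=256
    nvme read /dev/nvme0n1 --start-block=256 --block-count=63 \
        --data-size=$((64 * 4096)) --data=/tmp/copied.bin
    cmp /tmp/copied.bin /tmp/expected.bin       # hypothetical reference file
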
00:12:15.991   16:22:45  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:12:15.991   16:22:45  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:12:15.991   16:22:45  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:12:15.991   16:22:45  -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:12:15.991   16:22:45  -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:15.991   16:22:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:15.991   16:22:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:15.991   16:22:45  -- common/autotest_common.sh@10 -- # set +x
00:12:15.991  ************************************
00:12:15.991  START TEST nvme_fdp
00:12:15.991  ************************************
00:12:15.991   16:22:45 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:12:16.250  * Looking for test storage...
00:12:16.250  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:16.250      16:22:45 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:12:16.250      16:22:45 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:16.250     16:22:45 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:16.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.250  		--rc genhtml_branch_coverage=1
00:12:16.250  		--rc genhtml_function_coverage=1
00:12:16.250  		--rc genhtml_legend=1
00:12:16.250  		--rc geninfo_all_blocks=1
00:12:16.250  		--rc geninfo_unexecuted_blocks=1
00:12:16.250  		
00:12:16.250  		'
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:16.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.250  		--rc genhtml_branch_coverage=1
00:12:16.250  		--rc genhtml_function_coverage=1
00:12:16.250  		--rc genhtml_legend=1
00:12:16.250  		--rc geninfo_all_blocks=1
00:12:16.250  		--rc geninfo_unexecuted_blocks=1
00:12:16.250  		
00:12:16.250  		'
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:16.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.250  		--rc genhtml_branch_coverage=1
00:12:16.250  		--rc genhtml_function_coverage=1
00:12:16.250  		--rc genhtml_legend=1
00:12:16.250  		--rc geninfo_all_blocks=1
00:12:16.250  		--rc geninfo_unexecuted_blocks=1
00:12:16.250  		
00:12:16.250  		'
00:12:16.250     16:22:45 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:16.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.250  		--rc genhtml_branch_coverage=1
00:12:16.250  		--rc genhtml_function_coverage=1
00:12:16.250  		--rc genhtml_legend=1
00:12:16.250  		--rc geninfo_all_blocks=1
00:12:16.250  		--rc geninfo_unexecuted_blocks=1
00:12:16.250  		
00:12:16.250  		'
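
The `lt 1.15 2` walk above is a field-wise dotted-version compare: both strings are split on `.`, `-`, and `:` into arrays, the shorter side is treated as zero-padded, and the first differing field decides. A compact equivalent (`version_lt` is illustrative; the real helpers are `lt`/`cmp_versions` in scripts/common.sh):

    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            if   (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0
            elif (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1
            fi
        done
        return 1                                # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken above
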
00:12:16.250    16:22:45 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:16.250       16:22:45 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:16.250      16:22:45 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:16.250      16:22:45 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:16.250       16:22:45 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:16.250       16:22:45 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:16.250       16:22:45 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:16.250       16:22:45 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:12:16.250       16:22:45 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
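
The PATH churn visible above comes from paths/export.sh prepending each tool directory unconditionally every time it is sourced, so the same /opt entries accumulate. A duplicate-free prepend is a one-case idiom (a sketch of the alternative, not what export.sh currently does):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                        # already present; leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH
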
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:12:16.250     16:22:45 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:12:16.251     16:22:45 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
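
These four registries are the entire output of a controller scan. By the end of the earlier nvme_scc scan they were related like this (values copied from the trace above):

    ctrls[nvme3]=nvme3                  # name of the assoc array holding id-ctrl fields
    nvmes[nvme3]=nvme3_ns               # name of the assoc array mapping namespaces
    bdfs[nvme3]=0000:00:13.0            # PCI address backing the controller
    ordered_ctrls[3]=nvme3              # index taken from the nvmeN suffix
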
00:12:16.251    16:22:45 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:16.251   16:22:45 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:16.819  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:17.079  Waiting for block devices as requested
00:12:17.079  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:17.356  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:17.356  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:17.616  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:22.904  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
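
Each `uio_pci_generic -> nvme` line above is a PCI driver rebind. The underlying sysfs mechanism, which setup.sh wraps together with device allowlisting, hugepage setup, and the device-settle wait just logged, looks roughly like this sketch:

    bdf=0000:00:13.0
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/unbind
    echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo ""     > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override
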
00:12:22.904   16:22:51 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:12:22.904   16:22:51 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:22.904   16:22:51 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:12:22.904   16:22:51 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:22.904   16:22:51 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:12:22.904    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.904   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:12:22.905    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.905   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:12:22.906    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.906   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
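The run of functions.sh@21-23 trace lines above is nvme_get populating the controller array: the output of `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0` (functions.sh@16) is read line by line with `IFS=: read -r reg val`, blank reads are skipped at @22, and each field is eval'd into the globally-scoped associative array `nvme0` at @23. A minimal sketch of that parser, reconstructed from the trace; the `nvme_cmd` variable name and the exact whitespace trimming are assumptions, not confirmed by the log:

    #!/usr/bin/env bash
    # Sketch of the nvme_get loop visible at functions.sh@16-23 in the trace.
    # Assumption: nvme-cli pads register names with spaces ("sn        : 12341"),
    # so keys are squeezed before use while values keep their trailing padding --
    # which is why the trace records nvme0[sn]='12341               '.
    nvme_get() {
        local ref=$1 reg val                          # @17
        local nvme_cmd=/usr/local/src/nvme-cli/nvme   # assumed; path taken from @16
        shift                                         # @18
        local -gA "$ref=()"                           # @20: global assoc array
        while IFS=: read -r reg val; do               # @21: split "field : value" on ':'
            [[ -n $reg ]] || continue                 # @22: skip empty reads
            eval "${ref}[${reg// /}]=\"${val# }\""    # @23: e.g. nvme0[vid]="0x1b36"
        done < <("$nvme_cmd" "$@")                    # @16: id-ctrl/id-ns invocation
    }

    # Usage mirroring the trace above:
    #   nvme_get nvme0 id-ctrl /dev/nvme0
    #   echo "${nvme0[vid]}"    # -> 0x1b36 for this QEMU controller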
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.907   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:12:22.907    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
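With the id-ctrl fields consumed, functions.sh@53-58 switch to namespace enumeration: `local -n _ctrl_ns=nvme0_ns` creates a nameref onto the per-controller namespace map, the extglob at @54 matches both the generic character node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory, and each match is fed back through nvme_get with id-ns before being indexed by its namespace id at @58. A sketch of that loop under the same assumptions as above; the value of `ctrl` and the `${ns##*/}` derivation of ns_dev are inferred from the trace rather than quoted from it:

    #!/usr/bin/env bash
    shopt -s extglob nullglob                  # assumed: the @(...) pattern needs extglob
    declare -A nvme0_ns=()
    declare -n _ctrl_ns=nvme0_ns               # @53 (local -n inside the real function)
    ctrl=/sys/class/nvme/nvme0                 # inferred from the [[ -e ... ]] checks
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng0*|nvme0n*
        [[ -e $ns ]] || continue               # @55
        ns_dev=${ns##*/}                       # @56: ng0n1, then nvme0n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57: same parser as for id-ctrl
        _ctrl_ns[${ns##*n}]=$ns_dev            # @58: key "1" = namespace id
    done

The second pass through this loop is what begins at functions.sh@54 just below: the same id-ns fields (nsze, ncap, nuse, ...) are parsed again, this time into the nvme0n1 array for the block device node.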
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.908    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.908   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:12:22.909    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.909   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
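The lbaf0-lbaf7 entries above record the namespace's supported LBA formats verbatim from id-ns, with the active one tagged "(in use)" (lbaf4 here: lbads:12, i.e. 4096-byte blocks, matching the flbas=0x4 seen earlier). A minimal sketch of how a caller could recover that block size from the array populated above; the helper name lbaf_block_size is hypothetical, not something this script defines:

  # Hypothetical helper (not part of nvme/functions.sh): pull the block size
  # of the in-use LBA format out of an array populated as traced above.
  lbaf_block_size() {
    local -n _ns=$1
    local i lbads
    for ((i = 0; i <= ${_ns[nlbaf]:-7}; i++)); do
      [[ ${_ns[lbaf$i]} == *"(in use)"* ]] || continue
      # Entry looks like: 'ms:0   lbads:12 rp:0 (in use)'
      lbads=${_ns[lbaf$i]#*lbads:}
      lbads=${lbads%% *}
      echo $((1 << lbads)) && return 0   # lbads is log2(block size)
    done
    return 1
  }
  lbaf_block_size nvme0n1   # -> 4096 for the namespace above (lbaf4, lbads:12)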
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
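With the namespace walk done, functions.sh@58-63 files the controller into the script's global bookkeeping: _ctrl_ns takes the per-namespace entry, ctrls maps the device name to its id-ctrl array, nvmes names the per-namespace array, bdfs records the PCI address, and ordered_ctrls keeps controllers in index order. A sketch of how later code might walk these tables, with the shapes inferred from this trace alone; the loop itself is illustrative, not from the script:

  # Illustrative walk over the bookkeeping arrays populated at @58-63.
  for ctrl in "${ordered_ctrls[@]}"; do
    sn_ref="${ctrl}[sn]"                     # indirect lookup into e.g. nvme0[sn]
    printf '%s at %s (sn: %s)\n' "$ctrl" "${bdfs[$ctrl]}" "${!sn_ref}"
    unset -n _ctrl_ns
    declare -n _ctrl_ns=${nvmes[$ctrl]}      # e.g. nvme0_ns, indexed by nsid
    for nsid in "${!_ctrl_ns[@]}"; do
      echo "  ns $nsid -> ${_ctrl_ns[$nsid]}"
    done
  done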
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:22.910   16:22:51 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:22.910   16:22:51 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:22.910   16:22:51 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:22.910   16:22:51 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
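pci_can_use (scripts/common.sh@18-27) decides whether the test may claim a device: the trace shows 0000:00:10.0 tested against an empty block list (@21), then an empty allow list (@25), so it falls through to return 0 (@27). A sketch consistent with those traced checks; PCI_BLOCKED and PCI_ALLOWED are the list variables SPDK's scripts use elsewhere, but take the exact reconstruction as an assumption:

  # Reconstruction of the gating seen at scripts/common.sh@18-27 (assumed).
  pci_can_use() {
    local i
    # A device on the block list is rejected outright (@21).
    if [[ " $PCI_BLOCKED " =~ \ $1\  ]]; then
      return 1
    fi
    # No allow list set: any remaining device may be used (@25, @27).
    if [[ -z $PCI_ALLOWED ]]; then
      return 0
    fi
    # Otherwise the BDF must appear on the allow list.
    for i in $PCI_ALLOWED; do
      [[ $i == "$1" ]] && return 0
    done
    return 1
  }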
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
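Everything in this identify dump is driven by the nvme_get helper traced here: it declares a global associative array named after the device (@20), runs the nvme-cli subcommand (@16), then splits each output line at the first ':' and evals the key/value pair into the array (@21-23). A reconstruction from the traced line numbers; treat it as a sketch, and note NVME_CMD merely stands in for the traced /usr/local/src/nvme-cli/nvme path:

  # nvme_get as reconstructed from the trace (functions.sh@16-23, assumed).
  NVME_CMD=${NVME_CMD:-nvme}
  nvme_get() {
    local ref=$1 reg val                       # @17
    shift                                      # @18
    local -gA "$ref=()"                        # @20: e.g. nvme1=()
    while IFS=: read -r reg val; do            # @21: split at the first ':'
      [[ -n $val ]] || continue                # @22: skip lines without a value
      eval "${ref}[${reg// /}]=\"${val# }\""   # @23: strip key spaces and one leading space of val
    done < <("$NVME_CMD" "$@")                 # @16: e.g. id-ctrl /dev/nvme1
  }
  nvme_get nvme1 id-ctrl /dev/nvme1            # populates ${nvme1[vid]}, ${nvme1[sn]}, ...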
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.910   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:12:22.910    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:12:22.911    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:12:22.911   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:12:22.912    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.912   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
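A quirk visible just above: nvme-cli wraps each power state across several lines, and because the loop splits at the first ':' only, the continuation lines get filed under keys of their own. Hence ps0 carries everything through rrl:0 while the wrapped remainder shows up as nvme1[rwt] and nvme1[active_power_workload]. A two-line demo of that split, with values copied from the trace (the exact leading whitespace in nvme-cli's output may differ):

  # IFS=: read splits at the first colon only; later colons stay in val.
  IFS=: read -r reg val <<< 'ps 0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
  echo "[$reg] -> [$val]"   # [ps 0 ] -> [ mp:25.00W operational ... rrl:0]
  IFS=: read -r reg val <<< '           rwt:0 rwl:0 idle_power:- active_power:-'
  echo "[$reg] -> [$val]"   # [           rwt] -> [0 rwl:0 idle_power:- active_power:-]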
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"'
00:12:22.913    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.913   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:22.914    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.914   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
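For reference, each lbafN string above decodes per NVMe Identify Namespace: ms is the per-LBA metadata size in bytes, lbads is log2 of the LBA data size, and rp is a relative-performance hint, so the format flagged "(in use)" is 4 KiB of data plus 64 B of metadata per block. A one-line check of the arithmetic:

    # lbaf7 = "ms:64  lbads:12 rp:0 (in use)" -> 2^12-byte data blocks
    echo $((1 << 12))    # 4096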
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
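The @16-23 cycles traced above are nvme_get (nvme/functions.sh) slurping nvme-cli "reg : val" output into a global associative array named after the device node. A minimal sketch reconstructed from the traced lines; the NVME_CMD fallback and the exact whitespace trimming are assumptions, not the verbatim helper:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                           # e.g. ng1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                 # header lines have no value (the @22 miss above)
            reg=${reg//[[:space:]]/}                  # "lbaf  7 " -> "lbaf7"
            eval "${ref}[${reg}]=\"${val# }\""        # ng1n1[nsze]="0x17a17a"
        done < <("${NVME_CMD:-nvme}" "$@")            # trace runs /usr/local/src/nvme-cli/nvme
    }

Invoked as at functions.sh@57 below (nvme_get nvme1n1 id-ns /dev/nvme1n1), it leaves the nsze, lbaf7 and friends populated exactly as the evals in the trace show.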
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:12:22.915    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.915   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
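Worth noting: the generic character node (ng1n1, stored at @58 further up) and the block node (nvme1n1, stored here) land in the same _ctrl_ns slot, because ${ns##*n} strips the longest prefix ending in "n" and leaves only the namespace ordinal; the block device is globbed second, so it overwrites the generic entry. The traced ns is the full sysfs path, but the expansion yields the same ordinal either way:

    ns=/sys/class/nvme/nvme1/ng1n1;   echo "${ns##*n}"    # 1
    ns=/sys/class/nvme/nvme1/nvme1n1; echo "${ns##*n}"    # 1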
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
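functions.sh@60-63 close out one pass of the controller scan for nvme1: the controller name, its <name>_ns namespace map, its PCI address and its slot in ordered_ctrls are all recorded before the loop advances to nvme2 below. The overall shape, pieced together from the traced line numbers (anything not visible in the trace is a guess and marked as such):

    for ctrl in /sys/class/nvme/nvme*; do                           # @47
        [[ -e $ctrl ]] || continue                                  # @48
        pci=$(< "$ctrl/address")                                    # @49 - guess at where the bdf comes from
        pci_can_use "$pci" || continue                              # @50
        ctrl_dev=${ctrl##*/}                                        # @51
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"               # @52
        local -n _ctrl_ns=${ctrl_dev}_ns                            # guess at the nameref behind @58
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do # @54 (extglob)
            [[ -e $ns ]] || continue                                # @55
            ns_dev=${ns##*/}                                        # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                 # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                             # @58
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                                # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                           # @61
        bdfs["$ctrl_dev"]=$pci                                      # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                  # @63
    done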
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:12:22.916   16:22:51 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:22.916   16:22:51 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:12:22.916   16:22:51 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:22.916   16:22:51 nvme_fdp -- scripts/common.sh@27 -- # return 0
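pci_can_use (scripts/common.sh@18-27) gates each discovered controller on the PCI_ALLOWED / PCI_BLOCKED environment lists; both are empty in this run, so 0000:00:12.0 falls straight through to return 0. A sketch consistent with the three traced tests, not the verbatim helper:

    pci_can_use() {
        local i                                       # @18
        # @21: a hit on the allow-list wins outright (list is empty here, so no match)
        if [[ " ${PCI_ALLOWED:-} " =~ " $1 " ]]; then
            return 0
        fi
        # @25: no block-list either -> usable; @27 is the return taken above
        if [[ -z ${PCI_BLOCKED:-} ]]; then
            return 0
        fi
        for i in ${PCI_BLOCKED}; do                   # guess: an explicit block rejects
            [[ $i == "$1" ]] && return 1
        done
        return 0
    }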
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12342                ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342               "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342               '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl                          "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl                          '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0   "'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0   '
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
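The string fields sn, mn and fr arrive from nvme-cli as fixed-width, space-padded columns, and the helper stores them padding and all, which is why nvme2[sn]='12342               ' above. A consumer that wants the bare value can let read do the trimming, since read strips leading and trailing IFS whitespace:

    read -r sn <<< "${nvme2[sn]}"    # sn is now "12342"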
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"'
00:12:22.916    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.916   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
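ver packs the NVMe specification version as (major << 16) | (minor << 8) | tertiary, so the 0x10400 just stored is NVMe 1.4.0:

    printf '%d.%d.%d\n' $((0x10400 >> 16)) $(((0x10400 >> 8) & 0xff)) $((0x10400 & 0xff))    # 1.4.0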
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:12:22.917    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.917   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
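sqes and cqes encode queue-entry sizes as powers of two, required size in the low nibble and maximum in the high nibble: the 0x66 / 0x44 just parsed are the spec-standard 64-byte submission and 16-byte completion entries:

    echo $((1 << (0x66 & 0xf))) $((1 << (0x66 >> 4)))    # 64 64
    echo $((1 << (0x44 & 0xf))) $((1 << (0x44 >> 4)))    # 16 16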
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:12:22.918    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.918   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:12:22.919    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:12:22.919    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:12:22.919    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.919   16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:12:22.919    16:22:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12342 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
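[annotation] The ps0/rwt pair recorded just above shows how the colon-split behaves on the multi-colon power-state lines of id-ctrl output: `read -r reg val` with IFS=: splits only at the first colon, leaving the remainder (colons included) in val, so the wrapped second line of the power-state entry keys under its first token, rwt. A minimal standalone demo of that split, assuming id-ctrl output shaped like the values traced above (the exact padding in real nvme-cli output may differ):

    while IFS=: read -r reg val; do
        # reg gets everything before the first colon; val keeps the rest,
        # colons and all. Trim padding the same way the eval lines above imply.
        printf '%s => %s\n' "${reg//[[:space:]]/}" "${val# }"
    done <<'EOF'
    ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
              rwt:0 rwl:0 idle_power:- active_power:-
    EOF
    # prints:
    # ps0 => mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
    # rwt => 0 rwl:0 idle_power:- active_power:-

That is why nvme2[rwt] above holds "0 rwl:0 idle_power:- active_power:-" rather than a bare register value.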
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:12:22.919    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.919   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:22.920    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:22.920   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
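[annotation] At this point one full nvme_get invocation for ng2n1 has been traced end to end (functions.sh@16-23 above). A minimal sketch of what that loop does, reconstructed from the trace; the exact trimming and quoting in SPDK's nvme/functions.sh may differ, and the resolution of the bare `id-ns` argument to the /usr/local/src/nvme-cli/nvme binary is assumed to come from an nvme variable set elsewhere:

    nvme_get() {
        local ref=$1 reg val                # @17: ref names the target array, e.g. ng2n1
        shift                               # @18: remaining args form the nvme command
        local -gA "$ref=()"                 # @20: declare a global associative array
        while IFS=: read -r reg val; do     # @21: split each output line at the first colon
            [[ -n $val ]] || continue       # @22: skip headers and lines with no value
            # @23: e.g. eval 'ng2n1[nsze]="0x100000"' as seen in the trace
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <("${nvme:-nvme}" "$@")      # @16: the trace runs /usr/local/src/nvme-cli/nvme here
    }

After `nvme_get ng2n1 id-ns /dev/ng2n1`, "${ng2n1[nsze]}" expands to 0x100000, matching the assignments traced above.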
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:12:22.921    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.921   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:22.922    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:22.922   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
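[annotation] The functions.sh@54-58 lines bracketing each namespace show the enumeration pattern: an extglob over the controller's sysfs directory matching either the ng2nX character devices or the nvme2nX block devices, with the namespace id recovered by stripping everything through the last "n". A sketch of that loop, reconstructed from the trace; it assumes extglob is enabled and that this runs inside a function (the trace uses local -n), with ctrl=/sys/class/nvme/nvme2:

    # reconstructed from functions.sh@53-58; the surrounding function is assumed
    local -n _ctrl_ns=${ctrl##*/}_ns             # @53: nameref to nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: ng2n* | nvme2n*
        [[ -e $ns ]] || continue                 # @55: sysfs entry must exist
        ns_dev=${ns##*/}                         # @56: e.g. ng2n2
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fill the per-namespace array
        _ctrl_ns[${ns##*n}]=$ns_dev              # @58: key 1/2/3 = id after the last "n"
    done

The net effect is that nvme2_ns maps namespace ids to the per-namespace arrays populated by nvme_get, e.g. nvme2_ns[2]=ng2n2 as recorded at @58 just above.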
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:12:23.187    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.187   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:23.188    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:23.188   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
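What the block above shows: nvme/functions.sh@16-23 runs /usr/local/src/nvme-cli/nvme id-ns for a namespace, splits each output line on the first ':' into a register name and a value (functions.sh@21), skips empty values (functions.sh@22), and stores every non-empty pair into a global associative array via eval (functions.sh@23). A minimal sketch of that loop, reconstructed from this trace rather than copied from the source, so key normalization and quoting may differ in detail:

  # Sketch only: approximates nvme/functions.sh@16-23 as seen in this trace.
  nvme_get() {                        # nvme_get <array-name> <nvme-cli args...>
      local ref=$1 reg val
      shift
      local -gA "$ref=()"             # declare the target array at global scope
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue   # drop header/blank lines
          # e.g. ng2n3[mssrl]=128, ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
          eval "${ref}[${reg// /}]=\"${val# }\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # binary path taken from the log
  }

Called as in the nvme_get nvme2n1 id-ns /dev/nvme2n1 invocation just below (functions.sh@57), this yields the per-key assignments traced here.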
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:12:23.189    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:12:23.189   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
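Between namespaces the trace returns to the enumeration loop: functions.sh@54 globs the controller's sysfs directory for both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, nvme2n2, ...) with one extglob pattern, functions.sh@55-57 parse each match with nvme_get, and functions.sh@58 files the resulting array name under its namespace index. A rough equivalent, assuming extglob is enabled and reusing the nvme_get sketch above (the paths match this log; names beyond those visible in the trace are illustrative):

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2*|nvme2n*
      [[ -e $ns ]] || continue        # functions.sh@55: skip if the glob didn't match
      ns_dev=${ns##*/}                # e.g. nvme2n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev     # ${ns##*n} is the namespace number, e.g. 1
  done

Because ngXnY and nvmeXnY share the same trailing number, each nvme2nY parsed later replaces the ng2nY entry at the same _ctrl_ns index.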
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"'
00:12:23.190    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.190   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"'
00:12:23.191    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.191   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
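The namespaces parsed so far all report the same geometry: flbas=0x4 selects LBA format index 4 (the low bits of FLBAS, per the NVMe spec rather than anything stated in this log), and lbaf4 reads 'ms:0 lbads:12 rp:0 (in use)', i.e. 4096-byte logical blocks (2^12) with no metadata. With nsze=ncap=0x100000 blocks, each namespace works out to 0x100000 * 4096 bytes = 4 GiB. A quick shell-arithmetic check using the field values from the arrays above:

  # FLBAS bit layout (bits 3:0 = format index) is NVMe-spec knowledge, an
  # assumption here; the three values below are copied from this trace.
  flbas=0x4; lbads=12; nsze=0x100000
  echo "lba format index: $(( flbas & 0xf ))"                 # -> 4
  echo "block size:       $(( 1 << lbads )) bytes"            # -> 4096
  echo "namespace size:   $(( nsze * (1 << lbads) )) bytes"   # -> 4294967296 (4 GiB)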
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"'
00:12:23.192    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.192   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:23.193    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.193   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
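00:12:23.194  The lbaf0..lbaf7 values captured above encode each LBA format's metadata size (ms), LBA data size as a power-of-two exponent (lbads), and relative performance (rp); the entry tagged "(in use)" is the active format. A minimal sketch of turning the lbads exponent into a block size, using the in-use lbaf4 values from the trace:
00:12:23.194  
00:12:23.194      lbads=12               # from "lbaf4 ... lbads:12 ... (in use)" above
00:12:23.194      ms=0                   # no separate metadata for this format
00:12:23.194      echo $((1 << lbads))   # 4096-byte logical blocks
00:12:23.194  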
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
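00:12:23.194  Between controllers, the xtrace replays the same enumeration loop each time. A hedged sketch of its shape (nvme_get and pci_can_use are the helpers being traced above and below; how the PCI address is resolved from sysfs here is an assumption, not the exact functions.sh code):
00:12:23.194  
00:12:23.194      for ctrl in /sys/class/nvme/nvme*; do
00:12:23.194          [[ -e $ctrl ]] || continue
00:12:23.194          pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
00:12:23.194          pci_can_use "$pci" || continue                    # honor allow/block lists
00:12:23.194          ctrl_dev=${ctrl##*/}                              # e.g. nvme3
00:12:23.194          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill the nvme3[...] map
00:12:23.194      done
00:12:23.194  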
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:12:23.194   16:22:52 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:23.194   16:22:52 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:12:23.194   16:22:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:23.194   16:22:52 nvme_fdp -- scripts/common.sh@27 -- # return 0
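00:12:23.194  A hedged reconstruction of the pci_can_use gate traced in scripts/common.sh@18-27 just above: reject a device found on a block list, and when an allow list is set, require the device on it. The PCI_BLOCKED/PCI_ALLOWED variable names are assumptions; the empty left-hand sides of the [[ =~ ]] and [[ -z '' ]] tests in the trace show both lists are unset in this run, so 0000:00:13.0 passes:
00:12:23.194  
00:12:23.194      pci_can_use() {
00:12:23.194          local bdf=$1
00:12:23.194          [[ " $PCI_BLOCKED " =~ " $bdf " ]] && return 1   # explicitly blocked
00:12:23.194          [[ -z $PCI_ALLOWED ]] && return 0                # no allow list: everything passes
00:12:23.194          [[ " $PCI_ALLOWED " =~ " $bdf " ]]               # must be on the allow list
00:12:23.194      }
00:12:23.194  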
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12343                ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343               "'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343               '
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl                          "'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl                          '
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0   "'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0   '
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x2 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x88010 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
00:12:23.194    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.194   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.195   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"'
00:12:23.195    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"'
00:12:23.196    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.196   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:fdp-subsys3 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
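00:12:23.197  Everything traced at functions.sh@21-23 above is a single loop: nvme id-ctrl prints "name : value" lines, and the script splits each on the colon and stores it in an associative array keyed by register name. A minimal standalone sketch of the same pattern (not the exact functions.sh source; it assumes nvme-cli is installed and /dev/nvme3 exists):
00:12:23.197  
00:12:23.197      declare -A reg_map
00:12:23.197      while IFS=: read -r reg val; do
00:12:23.197          [[ -n $val ]] || continue               # skip banner/blank lines
00:12:23.197          reg=${reg//[[:space:]]/}                # strip padding around the key
00:12:23.197          val=${val#"${val%%[![:space:]]*}"}      # ltrim the value
00:12:23.197          reg_map[$reg]=$val
00:12:23.197      done < <(nvme id-ctrl /dev/nvme3)
00:12:23.197      echo "ctratt=${reg_map[ctratt]}"            # e.g. 0x88010, as captured above
00:12:23.197  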
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
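00:12:23.197  After a controller's registers are parsed, functions.sh@60-63 files it into four maps, as the trace above shows for nvme3. A hedged sketch of that bookkeeping (array names taken from the trace; the surrounding declarations are assumed). Indexing ordered_ctrls by the controller number keeps iteration in numeric sysfs order regardless of hash order:
00:12:23.197  
00:12:23.197      declare -A ctrls nvmes bdfs
00:12:23.197      declare -a ordered_ctrls
00:12:23.197      ctrl_dev=nvme3
00:12:23.197      ctrls[$ctrl_dev]=nvme3
00:12:23.197      nvmes[$ctrl_dev]=nvme3_ns                  # name of nvme3's namespace map
00:12:23.197      bdfs[$ctrl_dev]=0000:00:13.0               # PCI address from sysfs
00:12:23.197      ordered_ctrls[${ctrl_dev/nvme/}]=nvme3     # index 3 keeps numeric order
00:12:23.197  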
00:12:23.197   16:22:52 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:12:23.197    16:22:52 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp
00:12:23.197    16:22:52 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]]
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:12:23.197      16:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2
00:12:23.197     16:22:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:12:23.457      16:22:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:12:23.457     16:22:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:12:23.457     16:22:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:23.457    16:22:52 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:12:23.457    16:22:52 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:12:23.457    16:22:52 nvme_fdp -- nvme/functions.sh@209 -- # return 0
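00:12:23.457  The get_ctrls_with_feature walk above probes every parsed controller with ctrl_has_fdp, which tests bit 19 of the Identify Controller CTRATT field, the bit the script treats as Flexible Data Placement support. Only nvme3's ctratt=0x88010 has bit 19 (0x80000) set; the 0x8000 values on the other controllers do not, so nvme3 is the one echoed. A minimal sketch of the same test:
00:12:23.457  
00:12:23.457      ctrl_has_fdp() {
00:12:23.457          local ctratt=$1
00:12:23.457          (( ctratt & 1 << 19 ))        # nonzero -> FDP advertised
00:12:23.457      }
00:12:23.457      ctrl_has_fdp 0x8000  || echo "nvme2: no fdp"   # bit 15 only
00:12:23.457      ctrl_has_fdp 0x88010 && echo "nvme3: fdp"      # bit 19 set
00:12:23.457  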
00:12:23.457   16:22:52 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:12:23.457   16:22:52 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:12:23.457   16:22:52 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:24.026  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:24.965  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:24.965  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:24.965  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:24.965  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
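00:12:24.965  setup.sh's output above shows each QEMU NVMe function (1b36 0010) being detached from the kernel nvme driver and handed to uio_pci_generic so the userspace test can claim it. A hedged sketch of that rebind using the standard sysfs driver_override mechanism (SPDK's setup.sh is more elaborate; this is just the core step, and it needs root):
00:12:24.965  
00:12:24.965      bdf=0000:00:13.0
00:12:24.965      echo "$bdf" > /sys/bus/pci/drivers/nvme/unbind
00:12:24.965      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
00:12:24.965      echo "$bdf" > /sys/bus/pci/drivers_probe
00:12:24.965  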
00:12:24.965   16:22:54 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:12:24.965   16:22:54 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:24.965   16:22:54 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:24.965   16:22:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:24.965  ************************************
00:12:24.965  START TEST nvme_flexible_data_placement
00:12:24.965  ************************************
00:12:24.965   16:22:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:12:25.225  Initializing NVMe Controllers
00:12:25.225  Attaching to 0000:00:13.0
00:12:25.225  Controller supports FDP Attached to 0000:00:13.0
00:12:25.225  Namespace ID: 1 Endurance Group ID: 1
00:12:25.225  Initialization complete.
00:12:25.225  
00:12:25.225  ==================================
00:12:25.225  == FDP tests for Namespace: #01 ==
00:12:25.225  ==================================
00:12:25.225  
00:12:25.225  Get Feature: FDP:
00:12:25.225  =================
00:12:25.225    Enabled:                 Yes
00:12:25.225    FDP configuration Index: 0
00:12:25.225  
00:12:25.225  FDP configurations log page
00:12:25.225  ===========================
00:12:25.225  Number of FDP configurations:         1
00:12:25.225  Version:                              0
00:12:25.225  Size:                                 112
00:12:25.225  FDP Configuration Descriptor:         0
00:12:25.225    Descriptor Size:                    96
00:12:25.225    Reclaim Group Identifier format:    2
00:12:25.225    FDP Volatile Write Cache:           Not Present
00:12:25.225    FDP Configuration:                  Valid
00:12:25.225    Vendor Specific Size:               0
00:12:25.225    Number of Reclaim Groups:           2
00:12:25.225    Number of Reclaim Unit Handles:     8
00:12:25.225    Max Placement Identifiers:          128
00:12:25.225    Number of Namespaces Supported:     256
00:12:25.225    Reclaim Unit Nominal Size:          6000000 bytes
00:12:25.225    Estimated Reclaim Unit Time Limit:  Not Reported
00:12:25.225      RUH Desc #000:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #001:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #002:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #003:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #004:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #005:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #006:          RUH Type: Initially Isolated
00:12:25.225      RUH Desc #007:          RUH Type: Initially Isolated
00:12:25.225  
00:12:25.225  FDP reclaim unit handle usage log page
00:12:25.225  ======================================
00:12:25.225  Number of Reclaim Unit Handles:       8
00:12:25.225    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:12:25.225    RUH Usage Desc #001:   RUH Attributes: Unused
00:12:25.225    RUH Usage Desc #002:   RUH Attributes: Unused
00:12:25.225    RUH Usage Desc #003:   RUH Attributes: Unused
00:12:25.225    RUH Usage Desc #004:   RUH Attributes: Unused
00:12:25.225    RUH Usage Desc #005:   RUH Attributes: Unused
00:12:25.225    RUH Usage Desc #006:   RUH Attributes: Unused
00:12:25.225    RUH Usage Desc #007:   RUH Attributes: Unused
00:12:25.225  
00:12:25.225  FDP statistics log page
00:12:25.225  =======================
00:12:25.225  Host bytes with metadata written:  1066602496
00:12:25.225  Media bytes with metadata written: 1066844160
00:12:25.225  Media bytes erased:                0
00:12:25.225  
00:12:25.225  FDP Reclaim unit handle status
00:12:25.225  ==============================
00:12:25.225  Number of RUHS descriptors:   2
00:12:25.225  RUHS Desc: #0000  PID: 0x0000  RUHID: 0x0000  ERUT: 0x00000000  RUAMW: 0x00000000000026cf
00:12:25.225  RUHS Desc: #0001  PID: 0x4000  RUHID: 0x0000  ERUT: 0x00000000  RUAMW: 0x0000000000006000
00:12:25.225  
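00:12:25.225  In the RUHS descriptors above, RUAMW (Reclaim Unit Available Media Writes) is printed in hex; converting it shows how much writable room each reclaim unit still has, e.g.:
00:12:25.225  
00:12:25.225      printf '%d\n' 0x26cf   # 9935  -> the controller-specified RUH, partly consumed
00:12:25.225      printf '%d\n' 0x6000   # 24576 -> the second RUH, still untouched
00:12:25.225  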
00:12:25.225  FDP write on placement id: 0 success
00:12:25.225  
00:12:25.225  Set Feature: Enabling FDP events on Placement handle: #0 Success
00:12:25.225  
00:12:25.225  IO mgmt send: RUH update for Placement ID: #0 Success
00:12:25.225  
00:12:25.225  Get Feature: FDP Events for Placement handle: #0
00:12:25.225  ================================================
00:12:25.225  Number of FDP Events: 6
00:12:25.225  FDP Event: #0  Type: RU Not Written to Capacity     Enabled: Yes
00:12:25.225  FDP Event: #1  Type: RU Time Limit Exceeded         Enabled: Yes
00:12:25.225  FDP Event: #2  Type: Ctrlr Reset Modified RUHs      Enabled: Yes
00:12:25.225  FDP Event: #3  Type: Invalid Placement Identifier   Enabled: Yes
00:12:25.225  FDP Event: #4  Type: Media Reallocated              Enabled: No
00:12:25.225  FDP Event: #5  Type: Implicitly modified RUH        Enabled: No
00:12:25.225  
00:12:25.225  FDP events log page
00:12:25.225  ===================
00:12:25.225  Number of FDP events: 1
00:12:25.225  FDP Event #0:
00:12:25.225    Event Type:                      RU Not Written to Capacity
00:12:25.225    Placement Identifier:            Valid
00:12:25.225    NSID:                            Valid
00:12:25.225    Location:                        Valid
00:12:25.225    Placement Identifier:            0
00:12:25.225    Event Timestamp:                 8
00:12:25.225    Namespace Identifier:            1
00:12:25.225    Reclaim Group Identifier:        0
00:12:25.225    Reclaim Unit Handle Identifier:  0
00:12:25.225  
00:12:25.225  FDP test passed
00:12:25.225  
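The feature and log pages dumped above are standard NVMe TP4146 (Flexible Data Placement) structures, so the same data can be read outside this test once the controller is bound back to the kernel nvme driver. A minimal sketch, assuming a recent nvme-cli and a hypothetical device name /dev/nvme0:

    # Log identifiers per NVMe TP4146; the lengths are just generous read sizes.
    nvme get-log /dev/nvme0 --log-id=0x20 --log-len=512   # FDP configurations
    nvme get-log /dev/nvme0 --log-id=0x21 --log-len=512   # reclaim unit handle usage
    nvme get-log /dev/nvme0 --log-id=0x22 --log-len=64    # FDP statistics
    nvme get-feature /dev/nvme0 --feature-id=0x1d         # FDP mode: enabled + config index
    # nvme-cli 2.3+ also wraps these as 'nvme fdp configs|usage|stats|events' (assumed available).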
00:12:25.225  real	0m0.287s
00:12:25.225  user	0m0.087s
00:12:25.225  sys	0m0.098s
00:12:25.225   16:22:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:25.225  ************************************
00:12:25.225  END TEST nvme_flexible_data_placement
00:12:25.225  ************************************
00:12:25.225   16:22:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:12:25.485  ************************************
00:12:25.485  END TEST nvme_fdp
00:12:25.485  ************************************
00:12:25.485  
00:12:25.485  real	0m9.379s
00:12:25.485  user	0m1.702s
00:12:25.485  sys	0m2.634s
00:12:25.485   16:22:54 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:25.485   16:22:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:25.485   16:22:54  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:12:25.485   16:22:54  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:25.485   16:22:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:25.485   16:22:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:25.485   16:22:54  -- common/autotest_common.sh@10 -- # set +x
00:12:25.485  ************************************
00:12:25.485  START TEST nvme_rpc
00:12:25.485  ************************************
00:12:25.485   16:22:54 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:25.485  * Looking for test storage...
00:12:25.485  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:25.485    16:22:54 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:25.485     16:22:54 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:12:25.485     16:22:54 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:25.744    16:22:54 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:25.744     16:22:54 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:12:25.744    16:22:54 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:25.745    16:22:54 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:25.745    16:22:54 nvme_rpc -- scripts/common.sh@368 -- # return 0
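The xtrace above is scripts/common.sh answering "lt 1.15 2", i.e. whether the installed lcov is older than 2.x: both version strings are split on ".", "-" and ":" and compared field by field, succeeding at the first smaller field. A standalone bash sketch of that logic, assuming purely numeric fields (the real helper also validates each field against ^[0-9]+$):

    # lt VER1 VER2 -> exit 0 iff VER1 < VER2, comparing dot/dash/colon fields numerically.
    lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first larger field: not less
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first smaller field: less
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: use the plain --rc options"   # prints on this machine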
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:25.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.745  		--rc genhtml_branch_coverage=1
00:12:25.745  		--rc genhtml_function_coverage=1
00:12:25.745  		--rc genhtml_legend=1
00:12:25.745  		--rc geninfo_all_blocks=1
00:12:25.745  		--rc geninfo_unexecuted_blocks=1
00:12:25.745  		
00:12:25.745  		'
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:25.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.745  		--rc genhtml_branch_coverage=1
00:12:25.745  		--rc genhtml_function_coverage=1
00:12:25.745  		--rc genhtml_legend=1
00:12:25.745  		--rc geninfo_all_blocks=1
00:12:25.745  		--rc geninfo_unexecuted_blocks=1
00:12:25.745  		
00:12:25.745  		'
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:25.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.745  		--rc genhtml_branch_coverage=1
00:12:25.745  		--rc genhtml_function_coverage=1
00:12:25.745  		--rc genhtml_legend=1
00:12:25.745  		--rc geninfo_all_blocks=1
00:12:25.745  		--rc geninfo_unexecuted_blocks=1
00:12:25.745  		
00:12:25.745  		'
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:25.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:25.745  		--rc genhtml_branch_coverage=1
00:12:25.745  		--rc genhtml_function_coverage=1
00:12:25.745  		--rc genhtml_legend=1
00:12:25.745  		--rc geninfo_all_blocks=1
00:12:25.745  		--rc geninfo_unexecuted_blocks=1
00:12:25.745  		
00:12:25.745  		'
00:12:25.745   16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:25.745    16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:12:25.745     16:22:54 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:12:25.745     16:22:54 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:12:25.745     16:22:54 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:12:25.745     16:22:54 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:25.745      16:22:54 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:12:25.745      16:22:54 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:12:25.745     16:22:54 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:12:25.745     16:22:54 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:12:25.745    16:22:54 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:12:25.745   16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
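get_first_nvme_bdf above reduces to: render the generated NVMe config with gen_nvme.sh, pull every transport address out with jq, and keep the first. The same result as a one-liner, with the paths exactly as in this run:

    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    echo "$bdf"   # 0000:00:10.0 on this machine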
00:12:25.745   16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68212
00:12:25.745   16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:12:25.745   16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:12:25.745   16:22:54 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68212
00:12:25.745   16:22:54 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68212 ']'
00:12:25.745   16:22:54 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:25.745   16:22:54 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:25.745  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:25.745   16:22:54 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:25.745   16:22:54 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:25.745   16:22:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:26.004  [2024-12-09 16:22:54.991711] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:12:26.004  [2024-12-09 16:22:54.991832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68212 ]
00:12:26.263  [2024-12-09 16:22:55.183665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:26.263  [2024-12-09 16:22:55.293348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:26.263  [2024-12-09 16:22:55.293384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:27.200   16:22:56 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:27.200   16:22:56 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:27.200   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:12:27.200  Nvme0n1
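The attach RPC registers the PCIe controller at 0000:00:10.0 under the name Nvme0 and prints the namespace bdev it creates. A hypothetical follow-up to confirm that bdev exists (not part of this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].name'   # -> Nvme0n1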
00:12:27.459   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:12:27.459   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:12:27.459  request:
00:12:27.459  {
00:12:27.459    "bdev_name": "Nvme0n1",
00:12:27.459    "filename": "non_existing_file",
00:12:27.459    "method": "bdev_nvme_apply_firmware",
00:12:27.459    "req_id": 1
00:12:27.459  }
00:12:27.459  Got JSON-RPC error response
00:12:27.459  response:
00:12:27.459  {
00:12:27.459    "code": -32603,
00:12:27.459    "message": "open file failed."
00:12:27.459  }
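rpc.py is a thin JSON-RPC 2.0 client, and the request/response pair logged above (pretty-printed; the wire frame nests the parameters and uses "id" rather than "req_id") can be reproduced against the listen socket. A sketch, assuming an nc build with Unix-socket support:

    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_apply_firmware","params":{"bdev_name":"Nvme0n1","filename":"non_existing_file"}}' \
        | nc -U /var/tmp/spdk.sock
    # expected reply: {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"open file failed."}}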
00:12:27.459   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:12:27.459   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:12:27.459   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:12:27.719   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:12:27.719   16:22:56 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68212
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68212 ']'
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68212
00:12:27.719    16:22:56 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:27.719    16:22:56 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68212
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:27.719  killing process with pid 68212
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68212'
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@973 -- # kill 68212
00:12:27.719   16:22:56 nvme_rpc -- common/autotest_common.sh@978 -- # wait 68212
00:12:30.250  
00:12:30.250  real	0m4.514s
00:12:30.250  user	0m8.112s
00:12:30.250  sys	0m0.805s
00:12:30.250   16:22:59 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:30.250  ************************************
00:12:30.250  END TEST nvme_rpc
00:12:30.250  ************************************
00:12:30.250   16:22:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:30.250   16:22:59  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:12:30.250   16:22:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:30.250   16:22:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:30.250   16:22:59  -- common/autotest_common.sh@10 -- # set +x
00:12:30.250  ************************************
00:12:30.250  START TEST nvme_rpc_timeouts
00:12:30.250  ************************************
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:12:30.250  * Looking for test storage...
00:12:30.250  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:30.250     16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version
00:12:30.250     16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:30.250     16:22:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:30.250    16:22:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:30.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.250  		--rc genhtml_branch_coverage=1
00:12:30.250  		--rc genhtml_function_coverage=1
00:12:30.250  		--rc genhtml_legend=1
00:12:30.250  		--rc geninfo_all_blocks=1
00:12:30.250  		--rc geninfo_unexecuted_blocks=1
00:12:30.250  		
00:12:30.250  		'
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:30.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.250  		--rc genhtml_branch_coverage=1
00:12:30.250  		--rc genhtml_function_coverage=1
00:12:30.250  		--rc genhtml_legend=1
00:12:30.250  		--rc geninfo_all_blocks=1
00:12:30.250  		--rc geninfo_unexecuted_blocks=1
00:12:30.250  		
00:12:30.250  		'
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:30.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.250  		--rc genhtml_branch_coverage=1
00:12:30.250  		--rc genhtml_function_coverage=1
00:12:30.250  		--rc genhtml_legend=1
00:12:30.250  		--rc geninfo_all_blocks=1
00:12:30.250  		--rc geninfo_unexecuted_blocks=1
00:12:30.250  		
00:12:30.250  		'
00:12:30.250    16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:30.250  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.250  		--rc genhtml_branch_coverage=1
00:12:30.250  		--rc genhtml_function_coverage=1
00:12:30.250  		--rc genhtml_legend=1
00:12:30.250  		--rc geninfo_all_blocks=1
00:12:30.250  		--rc geninfo_unexecuted_blocks=1
00:12:30.250  		
00:12:30.250  		'
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68287
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68287
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68320
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:12:30.250   16:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68320
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 68320 ']'
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:30.250  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:30.250   16:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:12:30.510  [2024-12-09 16:22:59.447842] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:12:30.510  [2024-12-09 16:22:59.448473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68320 ]
00:12:30.510  [2024-12-09 16:22:59.626888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:30.769  [2024-12-09 16:22:59.736173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.769  [2024-12-09 16:22:59.736208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:31.705   16:23:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:31.705   16:23:00 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:12:31.705  Checking default timeout settings:
00:12:31.705   16:23:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:12:31.705   16:23:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:12:31.963  Making settings changes with rpc:
00:12:31.963   16:23:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:12:31.963   16:23:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:12:31.963  Check default vs. modified settings:
00:12:31.963   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:12:31.963   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:12:32.221   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:12:32.221   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:12:32.221    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:32.221    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68287
00:12:32.221    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68287
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:12:32.479  Setting action_on_timeout is changed as expected.
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68287
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68287
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:12:32.479  Setting timeout_us is changed as expected.
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68287
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68287
00:12:32.479    16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:12:32.479  Setting timeout_admin_us is changed as expected.
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
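The three checks above pull each setting out of the saved JSON config with grep/awk/sed. With jq on the box the same comparison can be done structurally; a sketch assuming the usual save_config layout (a "bdev" subsystem whose config[] carries the bdev_nvme_set_options entry):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config | jq -r '
        .subsystems[] | select(.subsystem == "bdev")
        | .config[]  | select(.method == "bdev_nvme_set_options")
        | "\(.params.action_on_timeout) \(.params.timeout_us) \(.params.timeout_admin_us)"'
    # expected after the modification above: abort 12000000 24000000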
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68287 /tmp/settings_modified_68287
00:12:32.479   16:23:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68320
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 68320 ']'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 68320
00:12:32.479    16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:32.479    16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68320
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:32.479  killing process with pid 68320
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68320'
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 68320
00:12:32.479   16:23:01 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 68320
00:12:35.012  RPC TIMEOUT SETTING TEST PASSED.
00:12:35.012   16:23:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:12:35.012  
00:12:35.012  real	0m4.711s
00:12:35.012  user	0m8.757s
00:12:35.012  sys	0m0.803s
00:12:35.012   16:23:03 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:35.012   16:23:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:12:35.012  ************************************
00:12:35.012  END TEST nvme_rpc_timeouts
00:12:35.012  ************************************
00:12:35.012    16:23:03  -- spdk/autotest.sh@239 -- # uname -s
00:12:35.012   16:23:03  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:12:35.012   16:23:03  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:12:35.012   16:23:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:35.012   16:23:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:35.012   16:23:03  -- common/autotest_common.sh@10 -- # set +x
00:12:35.012  ************************************
00:12:35.012  START TEST sw_hotplug
00:12:35.012  ************************************
00:12:35.012   16:23:03 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:12:35.012  * Looking for test storage...
00:12:35.012  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:35.012     16:23:04 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version
00:12:35.012     16:23:04 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:35.012     16:23:04 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:35.012    16:23:04 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:35.012  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:35.012  		--rc genhtml_branch_coverage=1
00:12:35.012  		--rc genhtml_function_coverage=1
00:12:35.012  		--rc genhtml_legend=1
00:12:35.012  		--rc geninfo_all_blocks=1
00:12:35.012  		--rc geninfo_unexecuted_blocks=1
00:12:35.012  		
00:12:35.012  		'
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:35.012  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:35.012  		--rc genhtml_branch_coverage=1
00:12:35.012  		--rc genhtml_function_coverage=1
00:12:35.012  		--rc genhtml_legend=1
00:12:35.012  		--rc geninfo_all_blocks=1
00:12:35.012  		--rc geninfo_unexecuted_blocks=1
00:12:35.012  		
00:12:35.012  		'
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:35.012  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:35.012  		--rc genhtml_branch_coverage=1
00:12:35.012  		--rc genhtml_function_coverage=1
00:12:35.012  		--rc genhtml_legend=1
00:12:35.012  		--rc geninfo_all_blocks=1
00:12:35.012  		--rc geninfo_unexecuted_blocks=1
00:12:35.012  		
00:12:35.012  		'
00:12:35.012    16:23:04 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:35.012  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:35.012  		--rc genhtml_branch_coverage=1
00:12:35.012  		--rc genhtml_function_coverage=1
00:12:35.012  		--rc genhtml_legend=1
00:12:35.012  		--rc geninfo_all_blocks=1
00:12:35.012  		--rc geninfo_unexecuted_blocks=1
00:12:35.012  		
00:12:35.012  		'
00:12:35.012   16:23:04 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:35.582  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:35.841  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:35.841  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:35.841  0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:35.841  0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:35.841   16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:12:35.841   16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:12:35.841   16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:12:36.100    16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@233 -- # local class
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:12:36.100       16:23:05 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:12:36.100       16:23:05 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:12:36.100       16:23:05 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:12:36.100      16:23:05 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]]
00:12:36.100     16:23:05 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@328 -- # (( 4 ))
00:12:36.100    16:23:05 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
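All of the class/subclass/progif plumbing above reduces to one pipeline: list devices with lspci in machine-readable numeric form, keep class code 0108 (mass storage, NVM subclass) with programming interface 02 (NVM Express), and strip the quoting. Reconstructed from the traced commands (the pipeline order is inferred from the nesting):

    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # on this VM, one per line: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0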
00:12:36.100   16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2
00:12:36.100   16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
00:12:36.100   16:23:05 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:36.669  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:36.929  Waiting for block devices as requested
00:12:36.929  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:37.188  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:37.188  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:37.448  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:42.726  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:12:42.726   16:23:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0'
00:12:42.726   16:23:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:42.986  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:12:43.246  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:43.246  0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0
00:12:43.505  0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0
00:12:43.765  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:43.765  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:12:44.024   16:23:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=69200
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:12:44.024   16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning
00:12:44.024    16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:12:44.024    16:23:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:12:44.024    16:23:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:12:44.024    16:23:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:12:44.025    16:23:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:12:44.025     16:23:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:12:44.025     16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:12:44.025     16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:12:44.025     16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:12:44.025     16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:12:44.025     16:23:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:12:44.284  Initializing NVMe Controllers
00:12:44.284  Attaching to 0000:00:10.0
00:12:44.284  Attaching to 0000:00:11.0
00:12:44.284  Attached to 0000:00:10.0
00:12:44.284  Attached to 0000:00:11.0
00:12:44.284  Initialization complete. Starting I/O...
00:12:44.284  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:12:44.284  QEMU NVMe Ctrl       (12341               ):          0 I/Os completed (+0)
00:12:44.284  
00:12:45.662  QEMU NVMe Ctrl       (12340               ):       1496 I/Os completed (+1496)
00:12:45.662  QEMU NVMe Ctrl       (12341               ):       1497 I/Os completed (+1497)
00:12:45.662  
00:12:46.600  QEMU NVMe Ctrl       (12340               ):       3492 I/Os completed (+1996)
00:12:46.600  QEMU NVMe Ctrl       (12341               ):       3496 I/Os completed (+1999)
00:12:46.600  
00:12:47.537  QEMU NVMe Ctrl       (12340               ):       5608 I/Os completed (+2116)
00:12:47.537  QEMU NVMe Ctrl       (12341               ):       5612 I/Os completed (+2116)
00:12:47.537  
00:12:48.475  QEMU NVMe Ctrl       (12340               ):       7692 I/Os completed (+2084)
00:12:48.475  QEMU NVMe Ctrl       (12341               ):       7696 I/Os completed (+2084)
00:12:48.475  
00:12:49.412  QEMU NVMe Ctrl       (12340               ):       9820 I/Os completed (+2128)
00:12:49.412  QEMU NVMe Ctrl       (12341               ):       9824 I/Os completed (+2128)
00:12:49.412  
00:12:50.356     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:50.357  [2024-12-09 16:23:19.177364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:50.357  Controller removed: QEMU NVMe Ctrl       (12340               )
00:12:50.357  [2024-12-09 16:23:19.180059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.180128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.180165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.180205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:12:50.357  [2024-12-09 16:23:19.184244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.184313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.184342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.184371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:50.357  [2024-12-09 16:23:19.211143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:50.357  Controller removed: QEMU NVMe Ctrl       (12341               )
00:12:50.357  [2024-12-09 16:23:19.213616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.213681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.213720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.213753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:12:50.357  [2024-12-09 16:23:19.217243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.217303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.217335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  [2024-12-09 16:23:19.217362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:50.357  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:12:50.357  EAL: Scan for (pci) bus failed.
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:50.357  
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:50.357     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:50.357  Attaching to 0000:00:10.0
00:12:50.357  Attached to 0000:00:10.0
00:12:50.634     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:50.634     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:50.634     16:23:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:50.634  Attaching to 0000:00:11.0
00:12:50.634  Attached to 0000:00:11.0
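That completes one hotplug event: each allowed controller is surprise-removed through sysfs (the "echo 1" writes above), the app logs the expected failed-state and abort errors, and the device is rescanned and rebound to uio_pci_generic before re-attach. Roughly, per device (these sysfs attributes are standard, but the exact sequence is an approximation of what sw_hotplug.sh does):

    echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove              # surprise-remove
    echo 1 > /sys/bus/pci/rescan                                   # re-enumerate the bus
    echo uio_pci_generic > /sys/bus/pci/devices/0000:00:10.0/driver_override
    echo 0000:00:10.0 > /sys/bus/pci/drivers_probe                 # rebind for userspace I/O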
00:12:51.582  QEMU NVMe Ctrl       (12340               ):       2080 I/Os completed (+2080)
00:12:51.582  QEMU NVMe Ctrl       (12341               ):       1824 I/Os completed (+1824)
00:12:51.582  
00:12:52.519  QEMU NVMe Ctrl       (12340               ):       4320 I/Os completed (+2240)
00:12:52.519  QEMU NVMe Ctrl       (12341               ):       4064 I/Os completed (+2240)
00:12:52.519  
00:12:53.456  QEMU NVMe Ctrl       (12340               ):       6556 I/Os completed (+2236)
00:12:53.456  QEMU NVMe Ctrl       (12341               ):       6300 I/Os completed (+2236)
00:12:53.456  
00:12:54.393  QEMU NVMe Ctrl       (12340               ):       8800 I/Os completed (+2244)
00:12:54.393  QEMU NVMe Ctrl       (12341               ):       8544 I/Os completed (+2244)
00:12:54.393  
00:12:55.330  QEMU NVMe Ctrl       (12340               ):      11064 I/Os completed (+2264)
00:12:55.330  QEMU NVMe Ctrl       (12341               ):      10808 I/Os completed (+2264)
00:12:55.330  
00:12:56.267  QEMU NVMe Ctrl       (12340               ):      13324 I/Os completed (+2260)
00:12:56.267  QEMU NVMe Ctrl       (12341               ):      13068 I/Os completed (+2260)
00:12:56.267  
00:12:57.644  QEMU NVMe Ctrl       (12340               ):      15596 I/Os completed (+2272)
00:12:57.644  QEMU NVMe Ctrl       (12341               ):      15341 I/Os completed (+2273)
00:12:57.644  
00:12:58.582  QEMU NVMe Ctrl       (12340               ):      17852 I/Os completed (+2256)
00:12:58.582  QEMU NVMe Ctrl       (12341               ):      17597 I/Os completed (+2256)
00:12:58.582  
00:12:59.519  QEMU NVMe Ctrl       (12340               ):      20124 I/Os completed (+2272)
00:12:59.519  QEMU NVMe Ctrl       (12341               ):      19869 I/Os completed (+2272)
00:12:59.519  
00:13:00.456  QEMU NVMe Ctrl       (12340               ):      22396 I/Os completed (+2272)
00:13:00.456  QEMU NVMe Ctrl       (12341               ):      22141 I/Os completed (+2272)
00:13:00.456  
00:13:01.394  QEMU NVMe Ctrl       (12340               ):      24640 I/Os completed (+2244)
00:13:01.394  QEMU NVMe Ctrl       (12341               ):      24385 I/Os completed (+2244)
00:13:01.394  
00:13:02.332  QEMU NVMe Ctrl       (12340               ):      26900 I/Os completed (+2260)
00:13:02.332  QEMU NVMe Ctrl       (12341               ):      26645 I/Os completed (+2260)
00:13:02.332  
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:02.592  [2024-12-09 16:23:31.588265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:02.592  Controller removed: QEMU NVMe Ctrl       (12340               )
00:13:02.592  [2024-12-09 16:23:31.590092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.590268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.590319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.590517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:13:02.592  [2024-12-09 16:23:31.593466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.593611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.593661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.593776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:02.592  [2024-12-09 16:23:31.629825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:02.592  Controller removed: QEMU NVMe Ctrl       (12341               )
00:13:02.592  [2024-12-09 16:23:31.631422] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.631512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.631564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.631586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:13:02.592  [2024-12-09 16:23:31.634154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.634199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.634219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592  [2024-12-09 16:23:31.634239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:02.592  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:13:02.592  EAL: Scan for (pci) bus failed.
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:02.592     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:02.851  Attaching to 0000:00:10.0
00:13:02.851  Attached to 0000:00:10.0
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:02.851     16:23:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:02.851  Attaching to 0000:00:11.0
00:13:02.851  Attached to 0000:00:11.0
00:13:03.420  QEMU NVMe Ctrl       (12340               ):       1192 I/Os completed (+1192)
00:13:03.420  QEMU NVMe Ctrl       (12341               ):        944 I/Os completed (+944)
00:13:03.420  
00:13:04.358  QEMU NVMe Ctrl       (12340               ):       3444 I/Os completed (+2252)
00:13:04.358  QEMU NVMe Ctrl       (12341               ):       3197 I/Os completed (+2253)
00:13:04.358  
00:13:05.295  QEMU NVMe Ctrl       (12340               ):       5680 I/Os completed (+2236)
00:13:05.295  QEMU NVMe Ctrl       (12341               ):       5433 I/Os completed (+2236)
00:13:05.295  
00:13:06.232  QEMU NVMe Ctrl       (12340               ):       7932 I/Os completed (+2252)
00:13:06.232  QEMU NVMe Ctrl       (12341               ):       7685 I/Os completed (+2252)
00:13:06.232  
00:13:07.610  QEMU NVMe Ctrl       (12340               ):      10180 I/Os completed (+2248)
00:13:07.610  QEMU NVMe Ctrl       (12341               ):       9933 I/Os completed (+2248)
00:13:07.610  
00:13:08.546  QEMU NVMe Ctrl       (12340               ):      12452 I/Os completed (+2272)
00:13:08.546  QEMU NVMe Ctrl       (12341               ):      12205 I/Os completed (+2272)
00:13:08.546  
00:13:09.484  QEMU NVMe Ctrl       (12340               ):      14708 I/Os completed (+2256)
00:13:09.484  QEMU NVMe Ctrl       (12341               ):      14463 I/Os completed (+2258)
00:13:09.484  
00:13:10.422  QEMU NVMe Ctrl       (12340               ):      16956 I/Os completed (+2248)
00:13:10.422  QEMU NVMe Ctrl       (12341               ):      16711 I/Os completed (+2248)
00:13:10.422  
00:13:11.359  QEMU NVMe Ctrl       (12340               ):      19216 I/Os completed (+2260)
00:13:11.359  QEMU NVMe Ctrl       (12341               ):      18971 I/Os completed (+2260)
00:13:11.359  
00:13:12.297  QEMU NVMe Ctrl       (12340               ):      21460 I/Os completed (+2244)
00:13:12.297  QEMU NVMe Ctrl       (12341               ):      21215 I/Os completed (+2244)
00:13:12.297  
00:13:13.235  QEMU NVMe Ctrl       (12340               ):      23724 I/Os completed (+2264)
00:13:13.235  QEMU NVMe Ctrl       (12341               ):      23479 I/Os completed (+2264)
00:13:13.235  
00:13:14.614  QEMU NVMe Ctrl       (12340               ):      25976 I/Os completed (+2252)
00:13:14.614  QEMU NVMe Ctrl       (12341               ):      25731 I/Os completed (+2252)
00:13:14.614  
00:13:14.874     16:23:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:14.874     16:23:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:14.874     16:23:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:14.874     16:23:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:14.874  [2024-12-09 16:23:43.968032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:14.874  Controller removed: QEMU NVMe Ctrl       (12340               )
00:13:14.874  [2024-12-09 16:23:43.969821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:43.969997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:43.970052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:43.970145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:13:14.874  [2024-12-09 16:23:43.973100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:43.973185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:43.973231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:43.973274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:14.874     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:14.874  [2024-12-09 16:23:44.006575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:14.874  Controller removed: QEMU NVMe Ctrl       (12341               )
00:13:14.874  [2024-12-09 16:23:44.008273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:44.008566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:44.008624] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:44.008757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:13:14.874  [2024-12-09 16:23:44.011410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:44.011576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:44.011636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874  [2024-12-09 16:23:44.011678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:14.874     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:13:14.874     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:14.874  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:13:14.874  EAL: Scan for (pci) bus failed.
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:15.134     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:15.134  Attaching to 0000:00:10.0
00:13:15.134  Attached to 0000:00:10.0
00:13:15.393     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:15.393     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:15.393     16:23:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:15.393  Attaching to 0000:00:11.0
00:13:15.393  Attached to 0000:00:11.0
00:13:15.393  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:13:15.393  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:13:15.393  [2024-12-09 16:23:44.348083] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:13:27.653     16:23:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:27.653     16:23:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:27.653    16:23:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.17
00:13:27.653    16:23:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.17
00:13:27.653    16:23:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:13:27.653   16:23:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.17
00:13:27.653   16:23:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.17 2
00:13:27.653  remove_attach_helper took 43.17s to complete (handling 2 nvme drive(s))
00:13:27.653   16:23:56 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
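The cycle above is one complete software-hotplug event: the @39-@40 loop detaches each controller, @56 triggers a PCI bus rescan, and @58-@62 rebind each device to uio_pci_generic. Bash xtrace prints the echoed values but not their redirection targets, so the sysfs paths in this sketch are assumptions based on the standard PCI hotplug knobs (the trap at @112 does confirm the rescan target), not values taken from the log:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"       # @40: detach the device (target assumed)
    done
    echo 1 > /sys/bus/pci/rescan                          # @56: rediscover the devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (target assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe          # @60/@61: rebind; the BDF is echoed
                                                          # twice in the trace, targets assumed
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done

The EAL "Scan for (pci) bus failed" message just above is the DPDK bus scan racing with a device that has not finished reappearing; the test tolerates it and retries.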
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 69200
00:13:34.227  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (69200) - No such process
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 69200
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69752
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:13:34.227   16:24:02 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69752
00:13:34.227   16:24:02 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69752 ']'
00:13:34.227   16:24:02 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:34.227   16:24:02 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:34.227   16:24:02 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:34.227  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:34.227   16:24:02 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:34.227   16:24:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:34.227  [2024-12-09 16:24:02.468032] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:13:34.227  [2024-12-09 16:24:02.468650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69752 ]
00:13:34.227  [2024-12-09 16:24:02.673109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.227  [2024-12-09 16:24:02.778526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
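The @109-@113 lines trace tgt_run_hotplug bringing up the SPDK target and blocking until its RPC socket answers. Reduced to its visible steps (killprocess and waitforlisten are helpers from autotest_common.sh whose internals are not shown in this log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # @109: run the target in the background
    spdk_tgt_pid=$!                                     # @110: here 69752
    # @112: on interrupt, kill the target and restore any removed PCI devices
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"                       # @113: poll /var/tmp/spdk.sock until ready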
00:13:34.487   16:24:03 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:34.487   16:24:03 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:13:34.487   16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:13:34.487   16:24:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.487   16:24:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:34.487   16:24:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.487   16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:13:34.487   16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:13:34.487    16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:13:34.487    16:24:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:13:34.487    16:24:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:13:34.487    16:24:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:13:34.487    16:24:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:13:34.487     16:24:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:13:34.487     16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:13:34.487     16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:13:34.487     16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:13:34.487     16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:13:34.487     16:24:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
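debug_remove_attach_helper 3 6 true (@117) runs three hotplug events with a 6 s settle time against the bdev layer. Pieced together from the sw_hotplug.sh line numbers in the traces that follow, the helper's bdev-mode loop looks roughly like this; a sketch, not the verbatim script, with sysfs targets assumed as noted earlier:

    remove_attach_helper() {                    # called as: remove_attach_helper 3 6 true
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # @27-@29
        local dev bdfs                                        # @30
        sleep "$hotplug_wait"                                 # @36: let the I/O load ramp up
        while ((hotplug_events--)); do                        # @38
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @39-@40
            done
            # @43/@50: in bdev mode, poll until the target reports no NVMe bdevs left
            while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
                printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
                sleep 0.5
            done
            # @56-@62: rescan and rebind, then @66: give I/O time to resume
            sleep 12
            bdfs=($(bdev_bdfs))                               # @70
            [[ ${bdfs[*]} == "${nvmes[*]}" ]]                 # @71: both BDFs must be back
        done
    }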
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:41.058      16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:41.058      16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:41.058      16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:41.058       16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:41.058       16:24:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.058       16:24:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:41.058  [2024-12-09 16:24:09.702267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:41.058  [2024-12-09 16:24:09.704604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.058  [2024-12-09 16:24:09.704745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.058  [2024-12-09 16:24:09.704816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.058  [2024-12-09 16:24:09.704879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.058  [2024-12-09 16:24:09.704925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.058  [2024-12-09 16:24:09.705131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.058  [2024-12-09 16:24:09.705153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.058  [2024-12-09 16:24:09.705169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.058  [2024-12-09 16:24:09.705181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.058  [2024-12-09 16:24:09.705199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.058  [2024-12-09 16:24:09.705210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.058  [2024-12-09 16:24:09.705225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.058       16:24:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:13:41.058     16:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:13:41.058  [2024-12-09 16:24:10.101602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:41.058  [2024-12-09 16:24:10.104027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.058  [2024-12-09 16:24:10.104073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.058  [2024-12-09 16:24:10.104090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.058  [2024-12-09 16:24:10.104107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.058  [2024-12-09 16:24:10.104122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.058  [2024-12-09 16:24:10.104134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.058  [2024-12-09 16:24:10.104149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.059  [2024-12-09 16:24:10.104160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.059  [2024-12-09 16:24:10.104174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.059  [2024-12-09 16:24:10.104186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:41.059  [2024-12-09 16:24:10.104200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:41.059  [2024-12-09 16:24:10.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
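bdev_bdfs itself is fully visible in the trace (@12-@13): it asks the target for its bdevs over RPC and extracts the unique NVMe PCI addresses; the /dev/fd/63 in the jq line is bash's process substitution. Reconstructed:

    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) |   # @12: query bdevs over /var/tmp/spdk.sock
            sort -u                       # @13: one line per unique BDF
    }

The wait loop treats an empty result as "all controllers gone" and a non-empty one as "still waiting", which is why the (( 1 > 0 )) / (( 0 > 0 )) checks alternate with the "Still waiting" messages above.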
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:41.318      16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:41.318      16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:41.318      16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:41.318       16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:41.318       16:24:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.318       16:24:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:41.318       16:24:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:41.318     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:41.577     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:41.577     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:41.577     16:24:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:13:53.793      16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:13:53.793      16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:53.793      16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:53.793       16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:53.793       16:24:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.793       16:24:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:53.793       16:24:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:53.793  [2024-12-09 16:24:22.681369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:53.793  [2024-12-09 16:24:22.684110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.793  [2024-12-09 16:24:22.684265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:53.793  [2024-12-09 16:24:22.684409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.793  [2024-12-09 16:24:22.684500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.793  [2024-12-09 16:24:22.684592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:53.793  [2024-12-09 16:24:22.684653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.793  [2024-12-09 16:24:22.684705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.793  [2024-12-09 16:24:22.684739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:53.793  [2024-12-09 16:24:22.684843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.793  [2024-12-09 16:24:22.684914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:53.793  [2024-12-09 16:24:22.684951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:53.793  [2024-12-09 16:24:22.685015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:53.793      16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:53.793      16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:53.793      16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:53.793       16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:53.793       16:24:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.793       16:24:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:53.793       16:24:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:13:53.793     16:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:13:54.053  [2024-12-09 16:24:23.080691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:54.053  [2024-12-09 16:24:23.082890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:54.053  [2024-12-09 16:24:23.082958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:54.053  [2024-12-09 16:24:23.082979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:54.053  [2024-12-09 16:24:23.082999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:54.053  [2024-12-09 16:24:23.083013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:54.053  [2024-12-09 16:24:23.083025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:54.053  [2024-12-09 16:24:23.083040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:54.053  [2024-12-09 16:24:23.083051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:54.053  [2024-12-09 16:24:23.083065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:54.053  [2024-12-09 16:24:23.083077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:54.053  [2024-12-09 16:24:23.083101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:54.053  [2024-12-09 16:24:23.083112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:54.312      16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:54.312      16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:54.312      16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:54.312       16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:54.312       16:24:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.312       16:24:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:54.312       16:24:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:54.312     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:54.572     16:24:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:06.786      16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:06.786      16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.786      16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.786       16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.786       16:24:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.786       16:24:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.786       16:24:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:06.786      16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:06.786      16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.786      16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.786       16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.786       16:24:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.786       16:24:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.786  [2024-12-09 16:24:35.760290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:06.786  [2024-12-09 16:24:35.762660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.786  [2024-12-09 16:24:35.762818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.786  [2024-12-09 16:24:35.762940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.786  [2024-12-09 16:24:35.763055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.786  [2024-12-09 16:24:35.763091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.786  [2024-12-09 16:24:35.763184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.786  [2024-12-09 16:24:35.763283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.786  [2024-12-09 16:24:35.763321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.786  [2024-12-09 16:24:35.763422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.786  [2024-12-09 16:24:35.763522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.786  [2024-12-09 16:24:35.763556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.786  [2024-12-09 16:24:35.763651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.786       16:24:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:06.786     16:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:07.355  [2024-12-09 16:24:36.259468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:07.355  [2024-12-09 16:24:36.262023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:07.355  [2024-12-09 16:24:36.262180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:07.355  [2024-12-09 16:24:36.262302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:07.355  [2024-12-09 16:24:36.262361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:07.355  [2024-12-09 16:24:36.262438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:07.355  [2024-12-09 16:24:36.262492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:07.355  [2024-12-09 16:24:36.262583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:07.355  [2024-12-09 16:24:36.262618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:07.355  [2024-12-09 16:24:36.262672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:07.355  [2024-12-09 16:24:36.262916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:07.355  [2024-12-09 16:24:36.262959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:07.355  [2024-12-09 16:24:36.263008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:07.355      16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:07.355      16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:07.355      16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:07.355       16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:07.355       16:24:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.355       16:24:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:07.355       16:24:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:07.355     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:07.615     16:24:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:19.830      16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:19.830      16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:19.830      16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:19.830       16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:19.830       16:24:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.830       16:24:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.830       16:24:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.09
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.09
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:14:19.830   16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.09
00:14:19.830   16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.09 2
00:14:19.830  remove_attach_helper took 45.09s to complete (handling 2 nvme drive(s))
00:14:19.830   16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:14:19.830   16:24:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.830   16:24:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.830   16:24:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.830   16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:14:19.830   16:24:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.830   16:24:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.830   16:24:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
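Between runs the hotplug monitor is toggled over RPC (@119 disables it, @120 re-enables it, mirroring the initial @115 enable). rpc_cmd is presumably a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock; the equivalent manual invocations would be roughly:

    scripts/rpc.py bdev_nvme_set_hotplug -d    # @119: stop monitoring for hotplug events
    scripts/rpc.py bdev_nvme_set_hotplug -e    # @120: resume monitoring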
00:14:19.830   16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:14:19.830   16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:14:19.830    16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:14:19.830    16:24:48 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:14:19.830     16:24:48 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:14:19.830     16:24:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:26.401      16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:26.401      16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:26.401       16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:26.401       16:24:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.401       16:24:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:26.401  [2024-12-09 16:24:54.828019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:26.401      16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:26.401  [2024-12-09 16:24:54.829603] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:54.829638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:54.829655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401  [2024-12-09 16:24:54.829679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:54.829691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:54.829706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401  [2024-12-09 16:24:54.829719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:54.829733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:54.829744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401  [2024-12-09 16:24:54.829762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:54.829774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:54.829791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401       16:24:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:14:26.401     16:24:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:26.401  [2024-12-09 16:24:55.327206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:26.401  [2024-12-09 16:24:55.329502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:55.329539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:55.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401  [2024-12-09 16:24:55.329578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:55.329592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:55.329604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401  [2024-12-09 16:24:55.329619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:55.329630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:55.329644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401  [2024-12-09 16:24:55.329657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:26.401  [2024-12-09 16:24:55.329671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:26.401  [2024-12-09 16:24:55.329682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:26.401      16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:26.401      16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:26.401      16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:26.401       16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:26.401       16:24:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.401       16:24:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:26.401       16:24:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:26.401     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:26.660     16:24:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:38.939      16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:38.939      16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:38.939      16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:38.939       16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:38.939       16:25:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.939       16:25:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:38.939       16:25:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:38.939      16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:38.939      16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:38.939      16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:38.939       16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:38.939       16:25:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.939       16:25:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:38.939  [2024-12-09 16:25:07.906951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:38.939  [2024-12-09 16:25:07.908515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.939  [2024-12-09 16:25:07.908557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:38.939  [2024-12-09 16:25:07.908574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.939  [2024-12-09 16:25:07.908599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.939  [2024-12-09 16:25:07.908611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:38.939  [2024-12-09 16:25:07.908625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.939  [2024-12-09 16:25:07.908638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.939  [2024-12-09 16:25:07.908653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:38.939  [2024-12-09 16:25:07.908664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.939  [2024-12-09 16:25:07.908682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:38.939  [2024-12-09 16:25:07.908693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:38.939  [2024-12-09 16:25:07.908707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:38.939       16:25:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:38.939     16:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:39.199  [2024-12-09 16:25:08.306294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:39.199  [2024-12-09 16:25:08.307808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:39.199  [2024-12-09 16:25:08.307843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:39.199  [2024-12-09 16:25:08.307861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:39.199  [2024-12-09 16:25:08.307879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:39.199  [2024-12-09 16:25:08.307921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:39.199  [2024-12-09 16:25:08.307934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:39.199  [2024-12-09 16:25:08.307950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:39.199  [2024-12-09 16:25:08.307961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:39.199  [2024-12-09 16:25:08.307975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:39.199  [2024-12-09 16:25:08.307987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:39.199  [2024-12-09 16:25:08.308001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:39.199  [2024-12-09 16:25:08.308012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:39.459      16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:39.459      16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:39.459      16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:39.459       16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:39.459       16:25:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.459       16:25:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:39.459       16:25:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:39.459     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:39.718     16:25:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:52.015      16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:52.015      16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:52.015      16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:52.015       16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:52.015       16:25:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.015       16:25:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:52.015       16:25:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:52.015  [2024-12-09 16:25:20.886058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:52.015  [2024-12-09 16:25:20.887879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.015  [2024-12-09 16:25:20.887938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.015  [2024-12-09 16:25:20.887955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.015  [2024-12-09 16:25:20.887980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.015  [2024-12-09 16:25:20.887992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.015  [2024-12-09 16:25:20.888007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.015  [2024-12-09 16:25:20.888020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.015  [2024-12-09 16:25:20.888037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.015  [2024-12-09 16:25:20.888049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.015  [2024-12-09 16:25:20.888064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.015  [2024-12-09 16:25:20.888074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.015  [2024-12-09 16:25:20.888088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:52.015      16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:52.015      16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:52.015       16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:52.015      16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:52.015       16:25:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.015       16:25:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:52.015       16:25:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:14:52.015     16:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:52.275  [2024-12-09 16:25:21.285414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:52.275  [2024-12-09 16:25:21.287025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.275  [2024-12-09 16:25:21.287058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.275  [2024-12-09 16:25:21.287077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.275  [2024-12-09 16:25:21.287097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.275  [2024-12-09 16:25:21.287111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.275  [2024-12-09 16:25:21.287123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.275  [2024-12-09 16:25:21.287139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.275  [2024-12-09 16:25:21.287150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.275  [2024-12-09 16:25:21.287164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.276  [2024-12-09 16:25:21.287177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:52.276  [2024-12-09 16:25:21.287194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:52.276  [2024-12-09 16:25:21.287205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:52.535      16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:52.535      16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:52.535      16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:52.535       16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:52.535       16:25:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.535       16:25:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:52.535       16:25:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:52.535     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:52.795     16:25:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:15:05.010     16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:15:05.010     16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:15:05.010      16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:15:05.010      16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:05.010      16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:05.010       16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:05.010       16:25:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.010       16:25:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:05.010       16:25:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.010     16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:15:05.010     16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:05.010    16:25:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.14
00:15:05.010    16:25:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.14
00:15:05.010    16:25:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:15:05.010   16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14
00:15:05.010   16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2
00:15:05.010  remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s))
00:15:05.010   16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
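The 45.14 printed here comes from a generic timing wrapper in autotest_common.sh (@719-@722: compute time, echo it, return 0), whose output the caller captures into helper_time at @21. Only the tail of that wrapper is traced, so the sketch below is an assumed shape: fractional seconds via bash 5's EPOCHREALTIME, with the separation of the timed command's own output from the timing value elided.

    # Run a command and emit how long it took, in fractional seconds.
    timing_cmd() {
        local start=$EPOCHREALTIME rc=0 time
        "$@" || rc=$?    # the real helper keeps this output off stdout
        time=$(awk -v s="$start" -v e="$EPOCHREALTIME" 'BEGIN { printf "%.2f", e - s }')
        echo "$time"
        return "$rc"
    }

    helper_time=$(timing_cmd remove_attach_helper)    # arguments elided in this sketch
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2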
00:15:05.010   16:25:33 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69752
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69752 ']'
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69752
00:15:05.010    16:25:33 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:05.010    16:25:33 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69752
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:05.010  killing process with pid 69752
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69752'
00:15:05.010   16:25:33 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69752
00:15:05.011   16:25:33 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69752
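killprocess (@954-@978) wraps a plain kill in several guards, all visible in the trace: a non-empty pid, a liveness probe with kill -0, and a Linux-only check of the process name so a sudo wrapper would be handled differently from the process itself (reactor_0 here). A sketch assembled from those traced tests; the body of the sudo branch is an assumption, since that branch is not taken in this run:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1      # no pid supplied
        kill -0 "$pid" || return 0     # process already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                :    # untraced branch: presumably signal the wrapped child instead
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }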
00:15:07.548   16:25:36 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:07.807  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:08.374  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:08.374  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:08.374  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:15:08.374  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:15:08.634  
00:15:08.634  real	2m33.668s
00:15:08.634  user	1m50.742s
00:15:08.634  sys	0m23.051s
00:15:08.634   16:25:37 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:08.634   16:25:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:08.634  ************************************
00:15:08.634  END TEST sw_hotplug
00:15:08.634  ************************************
00:15:08.634   16:25:37  -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]]
00:15:08.634   16:25:37  -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:15:08.634   16:25:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:08.634   16:25:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:08.634   16:25:37  -- common/autotest_common.sh@10 -- # set +x
00:15:08.634  ************************************
00:15:08.634  START TEST nvme_xnvme
00:15:08.634  ************************************
00:15:08.634   16:25:37 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:15:08.634  * Looking for test storage...
00:15:08.634  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:15:08.634     16:25:37 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:08.634      16:25:37 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:15:08.634      16:25:37 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:08.896     16:25:37 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:08.896      16:25:37 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:15:08.896      16:25:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:15:08.896      16:25:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:08.896      16:25:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:15:08.896     16:25:37 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:15:08.896      16:25:37 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:15:08.897      16:25:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:15:08.897      16:25:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:08.897      16:25:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:15:08.897     16:25:37 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:15:08.897     16:25:37 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:08.897     16:25:37 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:08.897     16:25:37 nvme_xnvme -- scripts/common.sh@368 -- # return 0
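The lt 1.15 2 check above walks scripts/common.sh's generic cmp_versions: both strings are split on the characters .-: into arrays, the component pairs are compared numerically (missing components defaulting to 0), and the outcome is mapped back to the requested operator. A condensed sketch of the same idea, simplified relative to the traced routine, which also normalizes non-numeric components via its decimal helper:

    # Succeed when "$1 $2 $3" holds under dotted-version ordering, e.g. 1.15 '<' 2.
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all components equal
    }

    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'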
00:15:08.897     16:25:37 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:08.897     16:25:37 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:08.897  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:08.897  		--rc genhtml_branch_coverage=1
00:15:08.897  		--rc genhtml_function_coverage=1
00:15:08.897  		--rc genhtml_legend=1
00:15:08.897  		--rc geninfo_all_blocks=1
00:15:08.897  		--rc geninfo_unexecuted_blocks=1
00:15:08.897  		
00:15:08.897  		'
00:15:08.897     16:25:37 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:08.897  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:08.897  		--rc genhtml_branch_coverage=1
00:15:08.897  		--rc genhtml_function_coverage=1
00:15:08.897  		--rc genhtml_legend=1
00:15:08.897  		--rc geninfo_all_blocks=1
00:15:08.897  		--rc geninfo_unexecuted_blocks=1
00:15:08.897  		
00:15:08.897  		'
00:15:08.897     16:25:37 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:08.897  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:08.897  		--rc genhtml_branch_coverage=1
00:15:08.897  		--rc genhtml_function_coverage=1
00:15:08.897  		--rc genhtml_legend=1
00:15:08.897  		--rc geninfo_all_blocks=1
00:15:08.897  		--rc geninfo_unexecuted_blocks=1
00:15:08.897  		
00:15:08.897  		'
00:15:08.897     16:25:37 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:08.897  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:08.897  		--rc genhtml_branch_coverage=1
00:15:08.897  		--rc genhtml_function_coverage=1
00:15:08.897  		--rc genhtml_legend=1
00:15:08.897  		--rc geninfo_all_blocks=1
00:15:08.897  		--rc geninfo_unexecuted_blocks=1
00:15:08.897  		
00:15:08.897  		'
00:15:08.897    16:25:37 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh
00:15:08.897     16:25:37 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:15:08.897       16:25:37 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n
00:15:08.897      16:25:37 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:15:08.897         16:25:37 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:15:08.897        16:25:37 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:15:08.897       16:25:37 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:15:08.897       16:25:37 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:15:08.897       16:25:37 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:15:08.898  #define SPDK_CONFIG_H
00:15:08.898  #define SPDK_CONFIG_AIO_FSDEV 1
00:15:08.898  #define SPDK_CONFIG_APPS 1
00:15:08.898  #define SPDK_CONFIG_ARCH native
00:15:08.898  #define SPDK_CONFIG_ASAN 1
00:15:08.898  #undef SPDK_CONFIG_AVAHI
00:15:08.898  #undef SPDK_CONFIG_CET
00:15:08.898  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:15:08.898  #define SPDK_CONFIG_COVERAGE 1
00:15:08.898  #define SPDK_CONFIG_CROSS_PREFIX 
00:15:08.898  #undef SPDK_CONFIG_CRYPTO
00:15:08.898  #undef SPDK_CONFIG_CRYPTO_MLX5
00:15:08.898  #undef SPDK_CONFIG_CUSTOMOCF
00:15:08.898  #undef SPDK_CONFIG_DAOS
00:15:08.898  #define SPDK_CONFIG_DAOS_DIR 
00:15:08.898  #define SPDK_CONFIG_DEBUG 1
00:15:08.898  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:15:08.898  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:15:08.898  #define SPDK_CONFIG_DPDK_INC_DIR 
00:15:08.898  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:15:08.898  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:15:08.898  #undef SPDK_CONFIG_DPDK_UADK
00:15:08.898  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:15:08.898  #define SPDK_CONFIG_EXAMPLES 1
00:15:08.898  #undef SPDK_CONFIG_FC
00:15:08.898  #define SPDK_CONFIG_FC_PATH 
00:15:08.898  #define SPDK_CONFIG_FIO_PLUGIN 1
00:15:08.898  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:15:08.898  #define SPDK_CONFIG_FSDEV 1
00:15:08.898  #undef SPDK_CONFIG_FUSE
00:15:08.898  #undef SPDK_CONFIG_FUZZER
00:15:08.898  #define SPDK_CONFIG_FUZZER_LIB 
00:15:08.898  #undef SPDK_CONFIG_GOLANG
00:15:08.898  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:15:08.898  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:15:08.898  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:15:08.898  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:15:08.898  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:15:08.898  #undef SPDK_CONFIG_HAVE_LIBBSD
00:15:08.898  #undef SPDK_CONFIG_HAVE_LZ4
00:15:08.898  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:15:08.898  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:15:08.898  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:15:08.898  #define SPDK_CONFIG_IDXD 1
00:15:08.898  #define SPDK_CONFIG_IDXD_KERNEL 1
00:15:08.898  #undef SPDK_CONFIG_IPSEC_MB
00:15:08.898  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:15:08.898  #define SPDK_CONFIG_ISAL 1
00:15:08.898  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:15:08.898  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:15:08.898  #define SPDK_CONFIG_LIBDIR 
00:15:08.898  #undef SPDK_CONFIG_LTO
00:15:08.898  #define SPDK_CONFIG_MAX_LCORES 128
00:15:08.898  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:15:08.898  #define SPDK_CONFIG_NVME_CUSE 1
00:15:08.898  #undef SPDK_CONFIG_OCF
00:15:08.898  #define SPDK_CONFIG_OCF_PATH 
00:15:08.898  #define SPDK_CONFIG_OPENSSL_PATH 
00:15:08.898  #undef SPDK_CONFIG_PGO_CAPTURE
00:15:08.898  #define SPDK_CONFIG_PGO_DIR 
00:15:08.898  #undef SPDK_CONFIG_PGO_USE
00:15:08.898  #define SPDK_CONFIG_PREFIX /usr/local
00:15:08.898  #undef SPDK_CONFIG_RAID5F
00:15:08.898  #undef SPDK_CONFIG_RBD
00:15:08.898  #define SPDK_CONFIG_RDMA 1
00:15:08.898  #define SPDK_CONFIG_RDMA_PROV verbs
00:15:08.898  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:15:08.898  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:15:08.898  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:15:08.898  #define SPDK_CONFIG_SHARED 1
00:15:08.898  #undef SPDK_CONFIG_SMA
00:15:08.898  #define SPDK_CONFIG_TESTS 1
00:15:08.898  #undef SPDK_CONFIG_TSAN
00:15:08.898  #define SPDK_CONFIG_UBLK 1
00:15:08.898  #define SPDK_CONFIG_UBSAN 1
00:15:08.898  #undef SPDK_CONFIG_UNIT_TESTS
00:15:08.898  #undef SPDK_CONFIG_URING
00:15:08.898  #define SPDK_CONFIG_URING_PATH 
00:15:08.898  #undef SPDK_CONFIG_URING_ZNS
00:15:08.898  #undef SPDK_CONFIG_USDT
00:15:08.898  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:15:08.898  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:15:08.898  #undef SPDK_CONFIG_VFIO_USER
00:15:08.898  #define SPDK_CONFIG_VFIO_USER_DIR 
00:15:08.898  #define SPDK_CONFIG_VHOST 1
00:15:08.898  #define SPDK_CONFIG_VIRTIO 1
00:15:08.898  #undef SPDK_CONFIG_VTUNE
00:15:08.898  #define SPDK_CONFIG_VTUNE_DIR 
00:15:08.898  #define SPDK_CONFIG_WERROR 1
00:15:08.898  #define SPDK_CONFIG_WPDK_DIR 
00:15:08.898  #define SPDK_CONFIG_XNVME 1
00:15:08.898  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:15:08.898       16:25:37 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
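The @23 test above matches the entire generated include/spdk/config.h against a glob; the long backslash-escaped run in the trace is just the pattern *#define SPDK_CONFIG_DEBUG* after xtrace quoting. In plain form the check is a single bash string match:

    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        :    # debug build detected; what the scripts do with this lies outside the trace
    fi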
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:08.898       16:25:37 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:15:08.898       16:25:37 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:08.898       16:25:37 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:08.898       16:25:37 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:08.898        16:25:37 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:08.898        16:25:37 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:08.898        16:25:37 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:08.898        16:25:37 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:15:08.898        16:25:37 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:15:08.898         16:25:37 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:15:08.898        16:25:37 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:15:08.898        16:25:37 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:15:08.898        16:25:37 nvme_xnvme -- pm/common@68 -- # uname -s
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@76 -- # SUDO[0]=
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E'
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]]
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:15:08.898       16:25:37 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@70 -- # :
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0
00:15:08.898      16:25:37 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@126 -- # :
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@140 -- # :
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@142 -- # : true
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@154 -- # :
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@169 -- # :
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
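Each ": value" / "export SPDK_*" pair above is the post-expansion xtrace of bash's default-assignment idiom: the : builtin evaluates ${VAR:=default} purely for its side effect, then the flag is exported for child scripts. Whether a given value is the script's default or was set by the caller is not distinguishable from the trace; the equivalent standalone form is:

    : "${SPDK_TEST_NVME:=1}"      # traced as ': 1'
    export SPDK_TEST_NVME
    : "${SPDK_TEST_FUZZER:=0}"    # traced as ': 0'
    export SPDK_TEST_FUZZER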
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@206 -- # cat
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:15:08.899      16:25:37 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV=
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt=
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:15:08.900      16:25:37 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind=
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind=
00:15:08.900       16:25:38 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE=
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 71102 ]]
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 71102
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:15:08.900       16:25:38 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.O5m5cX
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.O5m5cX/tests/xnvme /tmp/spdk.O5m5cX
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512
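Before scanning mounts, set_test_storage pads the requested 2 GiB by a 64 MiB
safety margin; a minimal sketch of the arithmetic behind the value traced
above, using the trace's variable name:

    # 2147483648 + (64 << 20) = 2147483648 + 67108864 = 2214592512
    requested_size=$((requested_size + (64 << 20)))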
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900       16:25:38 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T
00:15:08.900       16:25:38 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976186880
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592092672
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261653504
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266417152
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493771776
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506567680
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976186880
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592092672
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266273792
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97954480128
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1748299776
00:15:08.900      16:25:38 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
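The mount scan above is one read loop over df output; a condensed sketch
reconstructed from the trace (df -T reports 1K blocks, hence the *1024):

    # record each mount's device, fs type, and byte counts
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)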
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:15:08.901  * Looking for test storage...
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:15:08.901       16:25:38 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:15:08.901       16:25:38 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976186880
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:15:08.901      16:25:38 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:15:09.161  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
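Candidate selection then walks storage_candidates in order and takes the
first mount with enough free space; a simplified sketch continuing with the
arrays above (the real helper also special-cases tmpfs/ramfs and the root
mount, as the [[ btrfs == tmpfs ]] style checks in the trace hint):

    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        if ((target_space >= requested_size)); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done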
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1703 -- # true
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:09.161       16:25:38 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:09.161       16:25:38 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:09.161       16:25:38 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@368 -- # return 0
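The lt 1.15 2 check above gates the lcov option spelling on the installed
lcov version. A self-contained sketch of that field-by-field comparison,
assuming purely numeric fields (not SPDK's exact cmp_versions body):

    lt() {
        local IFS=.-:            # split versions on '.', '-' and ':'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$((${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}))
        for ((i = 0; i < n; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1                 # equal is not less-than
    }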
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:09.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.161  		--rc genhtml_branch_coverage=1
00:15:09.161  		--rc genhtml_function_coverage=1
00:15:09.161  		--rc genhtml_legend=1
00:15:09.161  		--rc geninfo_all_blocks=1
00:15:09.161  		--rc geninfo_unexecuted_blocks=1
00:15:09.161  		
00:15:09.161  		'
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:09.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.161  		--rc genhtml_branch_coverage=1
00:15:09.161  		--rc genhtml_function_coverage=1
00:15:09.161  		--rc genhtml_legend=1
00:15:09.161  		--rc geninfo_all_blocks=1
00:15:09.161  		--rc geninfo_unexecuted_blocks=1
00:15:09.161  		
00:15:09.161  		'
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:09.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.161  		--rc genhtml_branch_coverage=1
00:15:09.161  		--rc genhtml_function_coverage=1
00:15:09.161  		--rc genhtml_legend=1
00:15:09.161  		--rc geninfo_all_blocks=1
00:15:09.161  		--rc geninfo_unexecuted_blocks=1
00:15:09.161  		
00:15:09.161  		'
00:15:09.161      16:25:38 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:09.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.161  		--rc genhtml_branch_coverage=1
00:15:09.161  		--rc genhtml_function_coverage=1
00:15:09.161  		--rc genhtml_legend=1
00:15:09.161  		--rc geninfo_all_blocks=1
00:15:09.161  		--rc geninfo_unexecuted_blocks=1
00:15:09.161  		
00:15:09.161  		'
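With LCOV and LCOV_OPTS exported, any later coverage step can capture with
the version-appropriate flags; an illustrative invocation, not part of this
run:

    # $LCOV already expands to "lcov --rc lcov_branch_coverage=1 ..."
    $LCOV --capture --directory . --output-file coverage.info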
00:15:09.161     16:25:38 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:09.161      16:25:38 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:09.161       16:25:38 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:09.161       16:25:38 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:09.161       16:25:38 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:09.161       16:25:38 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:15:09.161       16:25:38 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
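The heavily repeated prefixes in PATH above come from paths/export.sh
prepending its directories unconditionally each time it is sourced; a
dedupe-on-prepend variant for comparison (illustrative, not the script's
actual code):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }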
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
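The method_bdev_xnvme_create_0 array declared above is what gen_conf
serializes into the bdev JSON fed to bdevperf and fio below; a hand-rolled
equivalent of that serialization, assuming jq is available:

    jq -n --arg name xnvme_bdev --arg filename /dev/nvme0n1 \
          --arg io libaio --argjson cc false \
          '{subsystems: [{subsystem: "bdev", config: [
             {params: {io_mechanism: $io, conserve_cpu: $cc,
                       filename: $filename, name: $name},
              method: "bdev_xnvme_create"},
             {method: "bdev_wait_for_examine"}]}]}'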
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:15:09.161    16:25:38 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:15:09.730  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:09.989  Waiting for block devices as requested
00:15:09.989  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:15:10.248  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:15:10.248  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:15:10.508  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:15:15.788  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:15:15.788  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:15:15.788    16:25:44 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:15:15.788     16:25:44 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:15:15.788    16:25:44 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:15:16.054    16:25:45 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:15:16.054    16:25:45 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:15:16.054    16:25:45 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:15:16.054    16:25:45 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:15:16.054    16:25:45 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:15:16.321  No valid GPT data, bailing
00:15:16.321     16:25:45 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:15:16.321    16:25:45 nvme_xnvme -- scripts/common.sh@394 -- # pt=
00:15:16.321    16:25:45 nvme_xnvme -- scripts/common.sh@395 -- # return 1
00:15:16.321    16:25:45 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1
00:15:16.321    16:25:45 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1
00:15:16.321    16:25:45 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1
00:15:16.321    16:25:45 nvme_xnvme -- xnvme/common.sh@83 -- # return 0
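prep_nvme's device pick, reconstructed from the trace: rebind devices with
setup.sh reset, reload the nvme module with poll queues, then claim the
first namespace that carries no partition table:

    shopt -s extglob
    for nvme in /dev/nvme*n!(*p*); do        # namespaces only, no partitions
        block_in_use "$nvme" && continue     # skip anything with GPT/PT data
        xnvme_filename["libaio"]=$nvme
        xnvme_filename["io_uring"]=$nvme
        xnvme_filename["io_uring_cmd"]=/dev/ng${nvme#/dev/nvme}
        return 0                             # runs inside prep_nvme()
    done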
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:15:16.321   16:25:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:15:16.321   16:25:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:16.321   16:25:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:16.321   16:25:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:16.321  ************************************
00:15:16.321  START TEST xnvme_rpc
00:15:16.321  ************************************
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71499
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71499
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71499 ']'
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:16.321  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:16.321   16:25:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:16.321  [2024-12-09 16:25:45.426141] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:15:16.321  [2024-12-09 16:25:45.426293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71499 ]
00:15:16.580  [2024-12-09 16:25:45.608203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:16.580  [2024-12-09 16:25:45.712411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:17.516  xnvme_bdev
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.516   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.516    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:17.775    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71499
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71499 ']'
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71499
00:15:17.775    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:17.775    16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71499
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:17.775  killing process with pid 71499
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71499'
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71499
00:15:17.775   16:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71499
00:15:20.311  
00:15:20.311  real	0m3.724s
00:15:20.311  user	0m3.765s
00:15:20.311  sys	0m0.545s
00:15:20.311   16:25:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:20.311  ************************************
00:15:20.311  END TEST xnvme_rpc
00:15:20.311  ************************************
00:15:20.311   16:25:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
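The xnvme_rpc test that just finished is reproducible by hand against a
running spdk_tgt, using the same RPCs and jq filters seen in the trace:

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    scripts/rpc.py framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev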
00:15:20.311   16:25:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:15:20.311   16:25:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:20.311   16:25:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:20.311   16:25:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:20.311  ************************************
00:15:20.311  START TEST xnvme_bdevperf
00:15:20.311  ************************************
00:15:20.311   16:25:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:15:20.311   16:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:15:20.311   16:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:15:20.311   16:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:20.311   16:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:15:20.311    16:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:20.311    16:25:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:20.311    16:25:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:20.311  {
00:15:20.311    "subsystems": [
00:15:20.311      {
00:15:20.311        "subsystem": "bdev",
00:15:20.311        "config": [
00:15:20.311          {
00:15:20.311            "params": {
00:15:20.311              "io_mechanism": "libaio",
00:15:20.311              "conserve_cpu": false,
00:15:20.311              "filename": "/dev/nvme0n1",
00:15:20.311              "name": "xnvme_bdev"
00:15:20.311            },
00:15:20.311            "method": "bdev_xnvme_create"
00:15:20.311          },
00:15:20.311          {
00:15:20.311            "method": "bdev_wait_for_examine"
00:15:20.311          }
00:15:20.311        ]
00:15:20.311      }
00:15:20.311    ]
00:15:20.311  }
00:15:20.311  [2024-12-09 16:25:49.203964] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:15:20.311  [2024-12-09 16:25:49.204093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71573 ]
00:15:20.311  [2024-12-09 16:25:49.382772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:20.570  [2024-12-09 16:25:49.489734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:20.829  Running I/O for 5 seconds...
00:15:22.702      42780.00 IOPS,   167.11 MiB/s
[2024-12-09T16:25:53.261Z]     41271.50 IOPS,   161.22 MiB/s
[2024-12-09T16:25:53.858Z]     39583.33 IOPS,   154.62 MiB/s
[2024-12-09T16:25:55.244Z]     40530.50 IOPS,   158.32 MiB/s
00:15:26.065                                                                                                  Latency(us)
00:15:26.065  
[2024-12-09T16:25:55.244Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:26.065  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:15:26.065  	 xnvme_bdev          :       5.00   41037.11     160.30       0.00     0.00    1556.06     166.14    3816.35
00:15:26.065  
[2024-12-09T16:25:55.244Z]  ===================================================================================================================
00:15:26.065  
[2024-12-09T16:25:55.244Z]  Total                       :              41037.11     160.30       0.00     0.00    1556.06     166.14    3816.35
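The bandwidth column follows directly from the IOPS at the 4 KiB IO size;
checking the randread row above:

    # MiB/s = IOPS * 4096 / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 41037.11 * 4096 / 1048576 }'   # 160.30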
00:15:27.002   16:25:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:27.002   16:25:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:15:27.002    16:25:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:27.002    16:25:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:27.002    16:25:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:27.002  {
00:15:27.002    "subsystems": [
00:15:27.002      {
00:15:27.002        "subsystem": "bdev",
00:15:27.002        "config": [
00:15:27.002          {
00:15:27.002            "params": {
00:15:27.002              "io_mechanism": "libaio",
00:15:27.002              "conserve_cpu": false,
00:15:27.002              "filename": "/dev/nvme0n1",
00:15:27.002              "name": "xnvme_bdev"
00:15:27.002            },
00:15:27.002            "method": "bdev_xnvme_create"
00:15:27.002          },
00:15:27.002          {
00:15:27.002            "method": "bdev_wait_for_examine"
00:15:27.002          }
00:15:27.002        ]
00:15:27.002      }
00:15:27.002    ]
00:15:27.002  }
00:15:27.002  [2024-12-09 16:25:56.012378] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:15:27.002  [2024-12-09 16:25:56.012505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71654 ]
00:15:27.261  [2024-12-09 16:25:56.195125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:27.261  [2024-12-09 16:25:56.300107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:27.521  Running I/O for 5 seconds...
00:15:29.842      41768.00 IOPS,   163.16 MiB/s
[2024-12-09T16:25:59.960Z]     41938.00 IOPS,   163.82 MiB/s
[2024-12-09T16:26:00.898Z]     39752.67 IOPS,   155.28 MiB/s
[2024-12-09T16:26:01.836Z]     37864.50 IOPS,   147.91 MiB/s
[2024-12-09T16:26:01.836Z]     36703.20 IOPS,   143.37 MiB/s
00:15:32.657                                                                                                  Latency(us)
00:15:32.657  
[2024-12-09T16:26:01.836Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:32.657  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:15:32.657  	 xnvme_bdev          :       5.01   36677.72     143.27       0.00     0.00    1741.22     480.33    6395.68
00:15:32.657  
[2024-12-09T16:26:01.836Z]  ===================================================================================================================
00:15:32.657  
[2024-12-09T16:26:01.836Z]  Total                       :              36677.72     143.27       0.00     0.00    1741.22     480.33    6395.68
00:15:33.596  
00:15:33.597  real	0m13.629s
00:15:33.597  user	0m4.691s
00:15:33.597  sys	0m6.038s
00:15:33.597   16:26:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:33.597  ************************************
00:15:33.597  END TEST xnvme_bdevperf
00:15:33.597  ************************************
00:15:33.597   16:26:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:33.857   16:26:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:15:33.857   16:26:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:33.857   16:26:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:33.857   16:26:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:33.857  ************************************
00:15:33.857  START TEST xnvme_fio_plugin
00:15:33.857  ************************************
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:33.857    16:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:33.857    16:26:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:33.857    16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:33.857    16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:33.857    16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:33.857    16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
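The fio_plugin tests wrap stock fio around SPDK's bdev ioengine; because the
plugin is ASAN-instrumented while fio itself is not, the trace above locates
the ASAN runtime via ldd and preloads it ahead of the plugin. Condensed from
the trace (the JSON config is fed on fd 62 by gen_conf in the harness):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev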
00:15:33.857  {
00:15:33.857    "subsystems": [
00:15:33.857      {
00:15:33.857        "subsystem": "bdev",
00:15:33.857        "config": [
00:15:33.857          {
00:15:33.857            "params": {
00:15:33.857              "io_mechanism": "libaio",
00:15:33.857              "conserve_cpu": false,
00:15:33.857              "filename": "/dev/nvme0n1",
00:15:33.857              "name": "xnvme_bdev"
00:15:33.857            },
00:15:33.857            "method": "bdev_xnvme_create"
00:15:33.857          },
00:15:33.857          {
00:15:33.857            "method": "bdev_wait_for_examine"
00:15:33.857          }
00:15:33.857        ]
00:15:33.857      }
00:15:33.857    ]
00:15:33.857  }
00:15:33.857   16:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:34.117  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:34.117  fio-3.35
00:15:34.117  Starting 1 thread
00:15:40.691  
00:15:40.691  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71780: Mon Dec  9 16:26:08 2024
00:15:40.691    read: IOPS=64.9k, BW=253MiB/s (266MB/s)(1267MiB/5001msec)
00:15:40.691      slat (usec): min=4, max=1061, avg=13.31, stdev=29.42
00:15:40.691      clat (usec): min=75, max=5182, avg=615.63, stdev=372.04
00:15:40.691       lat (usec): min=131, max=5234, avg=628.94, stdev=373.81
00:15:40.691      clat percentiles (usec):
00:15:40.691       |  1.00th=[  147],  5.00th=[  253], 10.00th=[  302], 20.00th=[  375],
00:15:40.691       | 30.00th=[  433], 40.00th=[  490], 50.00th=[  545], 60.00th=[  611],
00:15:40.691       | 70.00th=[  676], 80.00th=[  766], 90.00th=[  930], 95.00th=[ 1205],
00:15:40.691       | 99.00th=[ 2245], 99.50th=[ 2737], 99.90th=[ 3621], 99.95th=[ 3884],
00:15:40.691       | 99.99th=[ 4424]
00:15:40.691     bw (  KiB/s): min=195470, max=295008, per=100.00%, avg=259732.50, stdev=28149.84, samples=10
00:15:40.691     iops        : min=48867, max=73752, avg=64933.00, stdev=7037.55, samples=10
00:15:40.691    lat (usec)   : 100=0.13%, 250=4.70%, 500=37.30%, 750=36.58%, 1000=13.22%
00:15:40.691    lat (msec)   : 2=6.70%, 4=1.34%, 10=0.04%
00:15:40.691    cpu          : usr=30.44%, sys=56.64%, ctx=19, majf=0, minf=764
00:15:40.691    IO depths    : 1=0.1%, 2=0.7%, 4=2.5%, 8=8.2%, 16=24.2%, 32=62.3%, >=64=2.1%
00:15:40.691       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:40.691       complete  : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0%
00:15:40.691       issued rwts: total=324478,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:40.691       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:40.691  
00:15:40.691  Run status group 0 (all jobs):
00:15:40.691     READ: bw=253MiB/s (266MB/s), 253MiB/s-253MiB/s (266MB/s-266MB/s), io=1267MiB (1329MB), run=5001-5001msec
00:15:41.261  -----------------------------------------------------
00:15:41.261  Suppressions used:
00:15:41.261    count      bytes template
00:15:41.261        1         11 /usr/src/fio/parse.c
00:15:41.261        1          8 libtcmalloc_minimal.so
00:15:41.261        1        904 libcrypto.so
00:15:41.261  -----------------------------------------------------
00:15:41.261  
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:41.261    16:26:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:41.261    16:26:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:41.261    16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:41.261    16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:41.261    16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:41.261    16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:41.261  {
00:15:41.261    "subsystems": [
00:15:41.261      {
00:15:41.261        "subsystem": "bdev",
00:15:41.261        "config": [
00:15:41.261          {
00:15:41.261            "params": {
00:15:41.261              "io_mechanism": "libaio",
00:15:41.261              "conserve_cpu": false,
00:15:41.261              "filename": "/dev/nvme0n1",
00:15:41.261              "name": "xnvme_bdev"
00:15:41.261            },
00:15:41.261            "method": "bdev_xnvme_create"
00:15:41.261          },
00:15:41.261          {
00:15:41.261            "method": "bdev_wait_for_examine"
00:15:41.261          }
00:15:41.261        ]
00:15:41.261      }
00:15:41.261    ]
00:15:41.261  }
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:41.261   16:26:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:41.261  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:41.261  fio-3.35
00:15:41.261  Starting 1 thread
00:15:47.834  
00:15:47.834  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71877: Mon Dec  9 16:26:16 2024
00:15:47.834    write: IOPS=56.2k, BW=220MiB/s (230MB/s)(1098MiB/5001msec); 0 zone resets
00:15:47.834      slat (usec): min=4, max=832, avg=15.51, stdev=25.67
00:15:47.834      clat (usec): min=78, max=5685, avg=694.72, stdev=429.14
00:15:47.834       lat (usec): min=111, max=5758, avg=710.24, stdev=432.07
00:15:47.834      clat percentiles (usec):
00:15:47.834       |  1.00th=[  163],  5.00th=[  245], 10.00th=[  310], 20.00th=[  404],
00:15:47.834       | 30.00th=[  482], 40.00th=[  553], 50.00th=[  627], 60.00th=[  701],
00:15:47.834       | 70.00th=[  783], 80.00th=[  889], 90.00th=[ 1057], 95.00th=[ 1303],
00:15:47.834       | 99.00th=[ 2507], 99.50th=[ 3097], 99.90th=[ 4228], 99.95th=[ 4555],
00:15:47.834       | 99.99th=[ 5014]
00:15:47.834     bw (  KiB/s): min=186464, max=267128, per=100.00%, avg=226421.11, stdev=23392.67, samples=9
00:15:47.834     iops        : min=46616, max=66782, avg=56605.22, stdev=5848.19, samples=9
00:15:47.834    lat (usec)   : 100=0.07%, 250=5.33%, 500=27.17%, 750=33.67%, 1000=21.29%
00:15:47.834    lat (msec)   : 2=10.48%, 4=1.85%, 10=0.15%
00:15:47.834    cpu          : usr=29.08%, sys=54.82%, ctx=42, majf=0, minf=765
00:15:47.834    IO depths    : 1=0.1%, 2=0.8%, 4=3.1%, 8=9.1%, 16=24.9%, 32=60.0%, >=64=2.0%
00:15:47.834       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:47.834       complete  : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0%
00:15:47.834       issued rwts: total=0,281178,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:47.834       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:47.834  
00:15:47.834  Run status group 0 (all jobs):
00:15:47.834    WRITE: bw=220MiB/s (230MB/s), 220MiB/s-220MiB/s (230MB/s-230MB/s), io=1098MiB (1152MB), run=5001-5001msec
00:15:48.404  -----------------------------------------------------
00:15:48.404  Suppressions used:
00:15:48.404    count      bytes template
00:15:48.404        1         11 /usr/src/fio/parse.c
00:15:48.404        1          8 libtcmalloc_minimal.so
00:15:48.404        1        904 libcrypto.so
00:15:48.404  -----------------------------------------------------
00:15:48.404  
00:15:48.404  
00:15:48.404  real	0m14.680s
00:15:48.404  user	0m6.567s
00:15:48.404  sys	0m6.334s
00:15:48.404   16:26:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:48.404   16:26:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:48.404  ************************************
00:15:48.404  END TEST xnvme_fio_plugin
00:15:48.404  ************************************
00:15:48.404   16:26:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:15:48.404   16:26:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:15:48.404   16:26:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
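Second pass of the conserve_cpu loop: the same tests rerun with
conserve_cpu=true, which the cc map inside xnvme_rpc turns into the -c flag
on the create call, as the bdev_xnvme_create trace below shows. A sketch of
that mapping:

    declare -A cc=([false]='' [true]='-c')
    rpc_cmd bdev_xnvme_create "$filename" "$name" libaio ${cc[$conserve_cpu]}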
00:15:48.404   16:26:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:15:48.404   16:26:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:48.404   16:26:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:48.404   16:26:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:48.404  ************************************
00:15:48.404  START TEST xnvme_rpc
00:15:48.404  ************************************
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71963
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71963
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71963 ']'
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:48.404  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:48.404   16:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:48.664  [2024-12-09 16:26:17.679961] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:15:48.664  [2024-12-09 16:26:17.680077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71963 ]
00:15:48.923  [2024-12-09 16:26:17.861445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:48.923  [2024-12-09 16:26:17.968790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:49.863  xnvme_bdev
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.863   16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:15:49.863    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.864   16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:49.864    16:26:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71963
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71963 ']'
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71963
00:15:49.864    16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:15:49.864   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:49.864    16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71963
00:15:50.123   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:50.123   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:50.123  killing process with pid 71963
00:15:50.123   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71963'
00:15:50.123   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71963
00:15:50.123   16:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71963
00:15:52.662  
00:15:52.662  real	0m3.776s
00:15:52.662  user	0m3.814s
00:15:52.662  sys	0m0.527s
00:15:52.662   16:26:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:52.662   16:26:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:52.662  ************************************
00:15:52.662  END TEST xnvme_rpc
00:15:52.662  ************************************
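The xnvme_rpc test above drives the whole lifecycle of an xnvme bdev over the RPC socket: create it with conserve_cpu enabled, read each parameter back out of framework_get_config, then delete it and kill the target. A minimal sketch of the same flow by hand, assuming SPDK's scripts/rpc.py client (rpc_cmd in the trace wraps it) and the paths from this run:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt &                                  # xnvme.sh@52: launch the target
  tgt=$!                                                      # xnvme.sh@53: keep its pid
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                               # stand-in for waitforlisten
  done
  # xnvme.sh@56: filename, bdev name, io_mechanism; -c means conserve_cpu=true
  $SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
  # xnvme.sh@62-65: read a parameter back the way rpc_xnvme does
  $SPDK/scripts/rpc.py framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  $SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev           # xnvme.sh@67
  kill "$tgt" && wait "$tgt"                                  # xnvme.sh@70: killprocess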
00:15:52.662   16:26:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:15:52.662   16:26:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:52.662   16:26:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:52.662   16:26:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:52.662  ************************************
00:15:52.662  START TEST xnvme_bdevperf
00:15:52.662  ************************************
00:15:52.662   16:26:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:15:52.662   16:26:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:15:52.662   16:26:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:15:52.662   16:26:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:52.662   16:26:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:15:52.662    16:26:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:52.662    16:26:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:52.662    16:26:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:52.662  {
00:15:52.662    "subsystems": [
00:15:52.662      {
00:15:52.662        "subsystem": "bdev",
00:15:52.662        "config": [
00:15:52.663          {
00:15:52.663            "params": {
00:15:52.663              "io_mechanism": "libaio",
00:15:52.663              "conserve_cpu": true,
00:15:52.663              "filename": "/dev/nvme0n1",
00:15:52.663              "name": "xnvme_bdev"
00:15:52.663            },
00:15:52.663            "method": "bdev_xnvme_create"
00:15:52.663          },
00:15:52.663          {
00:15:52.663            "method": "bdev_wait_for_examine"
00:15:52.663          }
00:15:52.663        ]
00:15:52.663      }
00:15:52.663    ]
00:15:52.663  }
00:15:52.663  [2024-12-09 16:26:21.526128] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:15:52.663  [2024-12-09 16:26:21.526235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ]
00:15:52.663  [2024-12-09 16:26:21.706135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.663  [2024-12-09 16:26:21.815126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:53.232  Running I/O for 5 seconds...
00:15:55.109      31830.00 IOPS,   124.34 MiB/s
[2024-12-09T16:26:25.229Z]     31758.00 IOPS,   124.05 MiB/s
[2024-12-09T16:26:26.610Z]     31677.33 IOPS,   123.74 MiB/s
[2024-12-09T16:26:27.549Z]     31655.00 IOPS,   123.65 MiB/s
00:15:58.370                                                                                                  Latency(us)
00:15:58.370  
[2024-12-09T16:26:27.549Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:58.370  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:15:58.370  	 xnvme_bdev          :       5.00   32482.04     126.88       0.00     0.00    1966.37     180.95    5263.94
00:15:58.370  
[2024-12-09T16:26:27.549Z]  ===================================================================================================================
00:15:58.370  
[2024-12-09T16:26:27.549Z]  Total                       :              32482.04     126.88       0.00     0.00    1966.37     180.95    5263.94
00:15:59.309   16:26:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:59.309   16:26:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:15:59.309    16:26:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:59.309    16:26:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:59.309    16:26:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:59.309  {
00:15:59.309    "subsystems": [
00:15:59.309      {
00:15:59.309        "subsystem": "bdev",
00:15:59.309        "config": [
00:15:59.309          {
00:15:59.309            "params": {
00:15:59.309              "io_mechanism": "libaio",
00:15:59.309              "conserve_cpu": true,
00:15:59.309              "filename": "/dev/nvme0n1",
00:15:59.309              "name": "xnvme_bdev"
00:15:59.309            },
00:15:59.309            "method": "bdev_xnvme_create"
00:15:59.309          },
00:15:59.309          {
00:15:59.309            "method": "bdev_wait_for_examine"
00:15:59.309          }
00:15:59.309        ]
00:15:59.309      }
00:15:59.309    ]
00:15:59.309  }
00:15:59.309  [2024-12-09 16:26:28.345134] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:15:59.309  [2024-12-09 16:26:28.345249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72123 ]
00:15:59.571  [2024-12-09 16:26:28.527175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:59.571  [2024-12-09 16:26:28.634979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:59.840  Running I/O for 5 seconds...
00:16:02.181      42587.00 IOPS,   166.36 MiB/s
[2024-12-09T16:26:32.297Z]     42828.50 IOPS,   167.30 MiB/s
[2024-12-09T16:26:33.234Z]     42482.67 IOPS,   165.95 MiB/s
[2024-12-09T16:26:34.170Z]     43020.50 IOPS,   168.05 MiB/s
[2024-12-09T16:26:34.170Z]     43953.40 IOPS,   171.69 MiB/s
00:16:04.991                                                                                                  Latency(us)
00:16:04.991  
[2024-12-09T16:26:34.170Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:04.991  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:16:04.991  	 xnvme_bdev          :       5.00   43936.74     171.63       0.00     0.00    1452.91     340.51    5211.30
00:16:04.991  
[2024-12-09T16:26:34.170Z]  ===================================================================================================================
00:16:04.991  
[2024-12-09T16:26:34.170Z]  Total                       :              43936.74     171.63       0.00     0.00    1452.91     340.51    5211.30
00:16:05.930  
00:16:05.930  real	0m13.674s
00:16:05.930  user	0m4.886s
00:16:05.930  sys	0m5.744s
00:16:05.930   16:26:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:05.930   16:26:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:05.930  ************************************
00:16:05.930  END TEST xnvme_bdevperf
00:16:05.930  ************************************
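Each bdevperf run above generates its bdev config on the fly and streams it over /dev/fd/62; the JSON block echoed into the log is that config. A sketch of an equivalent standalone invocation with the config saved to a file (the /tmp path is illustrative; the flags are copied from the trace):

  cat > /tmp/xnvme_bdev.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create", "params": {"io_mechanism": "libaio",
     "conserve_cpu": true, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  # -q 64 queue depth, randread workload, 5 s, 4096 B I/O; -T targets xnvme_bdev only
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096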
00:16:06.188   16:26:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:16:06.188   16:26:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:06.188   16:26:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:06.188   16:26:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:06.188  ************************************
00:16:06.188  START TEST xnvme_fio_plugin
00:16:06.188  ************************************
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:06.188    16:26:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:06.188    16:26:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:06.188    16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:06.188    16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:06.188    16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:06.188    16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:06.188   16:26:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:06.188  {
00:16:06.188    "subsystems": [
00:16:06.188      {
00:16:06.188        "subsystem": "bdev",
00:16:06.188        "config": [
00:16:06.188          {
00:16:06.188            "params": {
00:16:06.188              "io_mechanism": "libaio",
00:16:06.188              "conserve_cpu": true,
00:16:06.188              "filename": "/dev/nvme0n1",
00:16:06.188              "name": "xnvme_bdev"
00:16:06.188            },
00:16:06.188            "method": "bdev_xnvme_create"
00:16:06.188          },
00:16:06.188          {
00:16:06.188            "method": "bdev_wait_for_examine"
00:16:06.188          }
00:16:06.188        ]
00:16:06.188      }
00:16:06.188    ]
00:16:06.188  }
00:16:06.447  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:06.447  fio-3.35
00:16:06.447  Starting 1 thread
00:16:13.016  
00:16:13.016  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72249: Mon Dec  9 16:26:41 2024
00:16:13.016    read: IOPS=53.7k, BW=210MiB/s (220MB/s)(1048MiB/5001msec)
00:16:13.016      slat (usec): min=4, max=560, avg=16.26, stdev=21.60
00:16:13.016      clat (usec): min=60, max=5665, avg=724.69, stdev=456.53
00:16:13.016       lat (usec): min=107, max=5782, avg=740.94, stdev=459.93
00:16:13.016      clat percentiles (usec):
00:16:13.016       |  1.00th=[  163],  5.00th=[  241], 10.00th=[  302], 20.00th=[  408],
00:16:13.016       | 30.00th=[  490], 40.00th=[  570], 50.00th=[  660], 60.00th=[  742],
00:16:13.016       | 70.00th=[  832], 80.00th=[  938], 90.00th=[ 1123], 95.00th=[ 1369],
00:16:13.016       | 99.00th=[ 2704], 99.50th=[ 3359], 99.90th=[ 4359], 99.95th=[ 4621],
00:16:13.016       | 99.99th=[ 5145]
00:16:13.016     bw (  KiB/s): min=196096, max=227480, per=99.88%, avg=214376.00, stdev=10997.28, samples=9
00:16:13.016     iops        : min=49024, max=56870, avg=53594.00, stdev=2749.32, samples=9
00:16:13.016    lat (usec)   : 100=0.03%, 250=5.70%, 500=25.31%, 750=30.30%, 1000=22.93%
00:16:13.016    lat (msec)   : 2=13.52%, 4=1.99%, 10=0.21%
00:16:13.016    cpu          : usr=26.54%, sys=52.76%, ctx=67, majf=0, minf=764
00:16:13.016    IO depths    : 1=0.1%, 2=0.9%, 4=3.4%, 8=9.7%, 16=25.4%, 32=58.7%, >=64=1.9%
00:16:13.016       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:13.016       complete  : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0%
00:16:13.016       issued rwts: total=268357,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:13.016       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:13.016  
00:16:13.016  Run status group 0 (all jobs):
00:16:13.016     READ: bw=210MiB/s (220MB/s), 210MiB/s-210MiB/s (220MB/s-220MB/s), io=1048MiB (1099MB), run=5001-5001msec
00:16:13.584  -----------------------------------------------------
00:16:13.584  Suppressions used:
00:16:13.584    count      bytes template
00:16:13.584        1         11 /usr/src/fio/parse.c
00:16:13.584        1          8 libtcmalloc_minimal.so
00:16:13.584        1        904 libcrypto.so
00:16:13.584  -----------------------------------------------------
00:16:13.584  
00:16:13.584   16:26:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:13.584    16:26:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:13.584    16:26:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:13.585    16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:13.585    16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:13.585    16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:13.585    16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:13.585   16:26:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:13.585  {
00:16:13.585    "subsystems": [
00:16:13.585      {
00:16:13.585        "subsystem": "bdev",
00:16:13.585        "config": [
00:16:13.585          {
00:16:13.585            "params": {
00:16:13.585              "io_mechanism": "libaio",
00:16:13.585              "conserve_cpu": true,
00:16:13.585              "filename": "/dev/nvme0n1",
00:16:13.585              "name": "xnvme_bdev"
00:16:13.585            },
00:16:13.585            "method": "bdev_xnvme_create"
00:16:13.585          },
00:16:13.585          {
00:16:13.585            "method": "bdev_wait_for_examine"
00:16:13.585          }
00:16:13.585        ]
00:16:13.585      }
00:16:13.585    ]
00:16:13.585  }
00:16:13.585  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:13.585  fio-3.35
00:16:13.585  Starting 1 thread
00:16:20.153  
00:16:20.153  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72342: Mon Dec  9 16:26:48 2024
00:16:20.153    write: IOPS=56.6k, BW=221MiB/s (232MB/s)(1106MiB/5001msec); 0 zone resets
00:16:20.153      slat (usec): min=4, max=689, avg=15.57, stdev=30.22
00:16:20.153      clat (usec): min=17, max=6787, avg=675.07, stdev=322.53
00:16:20.153       lat (usec): min=125, max=6792, avg=690.64, stdev=320.16
00:16:20.153      clat percentiles (usec):
00:16:20.153       |  1.00th=[  149],  5.00th=[  241], 10.00th=[  297], 20.00th=[  396],
00:16:20.153       | 30.00th=[  482], 40.00th=[  570], 50.00th=[  652], 60.00th=[  734],
00:16:20.153       | 70.00th=[  824], 80.00th=[  930], 90.00th=[ 1057], 95.00th=[ 1172],
00:16:20.153       | 99.00th=[ 1467], 99.50th=[ 1860], 99.90th=[ 2999], 99.95th=[ 3490],
00:16:20.153       | 99.99th=[ 5538]
00:16:20.153     bw (  KiB/s): min=204496, max=236264, per=100.00%, avg=226914.67, stdev=9794.42, samples=9
00:16:20.153     iops        : min=51124, max=59066, avg=56728.67, stdev=2448.61, samples=9
00:16:20.153    lat (usec)   : 20=0.01%, 50=0.01%, 100=0.15%, 250=5.58%, 500=26.33%
00:16:20.153    lat (usec)   : 750=29.72%, 1000=24.19%
00:16:20.153    lat (msec)   : 2=13.63%, 4=0.38%, 10=0.02%
00:16:20.153    cpu          : usr=26.64%, sys=61.68%, ctx=10, majf=0, minf=765
00:16:20.153    IO depths    : 1=0.2%, 2=0.8%, 4=3.1%, 8=9.8%, 16=25.8%, 32=58.4%, >=64=1.9%
00:16:20.153       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:20.153       complete  : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0%
00:16:20.153       issued rwts: total=0,283243,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:20.153       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:20.153  
00:16:20.153  Run status group 0 (all jobs):
00:16:20.153    WRITE: bw=221MiB/s (232MB/s), 221MiB/s-221MiB/s (232MB/s-232MB/s), io=1106MiB (1160MB), run=5001-5001msec
00:16:20.721  -----------------------------------------------------
00:16:20.721  Suppressions used:
00:16:20.721    count      bytes template
00:16:20.721        1         11 /usr/src/fio/parse.c
00:16:20.721        1          8 libtcmalloc_minimal.so
00:16:20.721        1        904 libcrypto.so
00:16:20.721  -----------------------------------------------------
00:16:20.721  
00:16:20.721  
00:16:20.721  real	0m14.664s
00:16:20.721  user	0m6.224s
00:16:20.721  sys	0m6.500s
00:16:20.721   16:26:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:20.721   16:26:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:20.721  ************************************
00:16:20.721  END TEST xnvme_fio_plugin
00:16:20.721  ************************************
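The fio steps above first run ldd over the spdk_bdev plugin to find the ASan runtime: with an address-sanitized build, libasan has to come first in LD_PRELOAD or fio fails to load the external ioengine. A sketch of the invocation the trace resolves to, reusing the illustrative JSON file from the earlier bdevperf sketch:

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  ASAN=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')    # autotest_common.sh@1349
  LD_PRELOAD="$ASAN $PLUGIN" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev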
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
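The lines above show the suite advancing its two loop dimensions: xnvme.sh@75-77 switch io_mechanism to io_uring, and xnvme.sh@82-84 reset conserve_cpu to false before re-running the same three tests. A sketch of that structure reconstructed from the trace; the full contents of xnvme_io and the elided loop body are assumptions:

  declare -A method_bdev_xnvme_create_0
  for io in "${xnvme_io[@]}"; do                          # libaio, io_uring, ...
      method_bdev_xnvme_create_0["io_mechanism"]=$io      # xnvme.sh@76
      method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 # xnvme.sh@77
      for cc in "${xnvme_conserve_cpu[@]}"; do            # false, then true
          method_bdev_xnvme_create_0["conserve_cpu"]=$cc  # xnvme.sh@83
          run_test xnvme_rpc xnvme_rpc                    # xnvme.sh@86
          run_test xnvme_bdevperf xnvme_bdevperf          # xnvme.sh@87
          run_test xnvme_fio_plugin xnvme_fio_plugin      # xnvme.sh@88
      done
  done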
00:16:20.980   16:26:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:16:20.980   16:26:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:20.980   16:26:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:20.980   16:26:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:20.980  ************************************
00:16:20.980  START TEST xnvme_rpc
00:16:20.980  ************************************
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72429
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72429
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72429 ']'
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:20.980  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:20.980   16:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.981  [2024-12-09 16:26:50.050037] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:16:20.981  [2024-12-09 16:26:50.050378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72429 ]
00:16:21.239  [2024-12-09 16:26:50.239469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:21.240  [2024-12-09 16:26:50.342257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.176  xnvme_bdev
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.176    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.176   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72429
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72429 ']'
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72429
00:16:22.435    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:22.435    16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72429
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:22.435  killing process with pid 72429
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72429'
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72429
00:16:22.435   16:26:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72429
00:16:24.972  
00:16:24.972  real	0m3.742s
00:16:24.972  user	0m3.789s
00:16:24.972  sys	0m0.569s
00:16:24.972   16:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:24.972   16:26:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:24.972  ************************************
00:16:24.972  END TEST xnvme_rpc
00:16:24.972  ************************************
00:16:24.972   16:26:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:24.972   16:26:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:24.972   16:26:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:24.972   16:26:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:24.972  ************************************
00:16:24.972  START TEST xnvme_bdevperf
00:16:24.972  ************************************
00:16:24.972   16:26:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:16:24.972   16:26:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:16:24.972   16:26:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:16:24.972   16:26:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:24.972   16:26:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:16:24.972    16:26:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:24.972    16:26:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:24.972    16:26:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:24.972  {
00:16:24.972    "subsystems": [
00:16:24.972      {
00:16:24.972        "subsystem": "bdev",
00:16:24.972        "config": [
00:16:24.972          {
00:16:24.973            "params": {
00:16:24.973              "io_mechanism": "io_uring",
00:16:24.973              "conserve_cpu": false,
00:16:24.973              "filename": "/dev/nvme0n1",
00:16:24.973              "name": "xnvme_bdev"
00:16:24.973            },
00:16:24.973            "method": "bdev_xnvme_create"
00:16:24.973          },
00:16:24.973          {
00:16:24.973            "method": "bdev_wait_for_examine"
00:16:24.973          }
00:16:24.973        ]
00:16:24.973      }
00:16:24.973    ]
00:16:24.973  }
00:16:24.973  [2024-12-09 16:26:53.843859] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:16:24.973  [2024-12-09 16:26:53.844138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72509 ]
00:16:24.973  [2024-12-09 16:26:54.025254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:24.973  [2024-12-09 16:26:54.132580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:25.542  Running I/O for 5 seconds...
00:16:27.420      27215.00 IOPS,   106.31 MiB/s
[2024-12-09T16:26:57.538Z]     26895.00 IOPS,   105.06 MiB/s
[2024-12-09T16:26:58.477Z]     26023.67 IOPS,   101.65 MiB/s
[2024-12-09T16:26:59.860Z]     25217.50 IOPS,    98.51 MiB/s
[2024-12-09T16:26:59.860Z]     24972.60 IOPS,    97.55 MiB/s
00:16:30.681                                                                                                  Latency(us)
00:16:30.681  
[2024-12-09T16:26:59.860Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:30.681  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:30.681  	 xnvme_bdev          :       5.01   24944.73      97.44       0.00     0.00    2558.35     509.94    9896.20
00:16:30.681  
[2024-12-09T16:26:59.860Z]  ===================================================================================================================
00:16:30.681  
[2024-12-09T16:26:59.860Z]  Total                       :              24944.73      97.44       0.00     0.00    2558.35     509.94    9896.20
00:16:31.618   16:27:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:31.618   16:27:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:16:31.618    16:27:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:31.618    16:27:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:31.618    16:27:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:31.618  {
00:16:31.618    "subsystems": [
00:16:31.618      {
00:16:31.618        "subsystem": "bdev",
00:16:31.618        "config": [
00:16:31.618          {
00:16:31.618            "params": {
00:16:31.618              "io_mechanism": "io_uring",
00:16:31.618              "conserve_cpu": false,
00:16:31.618              "filename": "/dev/nvme0n1",
00:16:31.618              "name": "xnvme_bdev"
00:16:31.618            },
00:16:31.618            "method": "bdev_xnvme_create"
00:16:31.618          },
00:16:31.618          {
00:16:31.618            "method": "bdev_wait_for_examine"
00:16:31.618          }
00:16:31.618        ]
00:16:31.618      }
00:16:31.618    ]
00:16:31.618  }
00:16:31.618  [2024-12-09 16:27:00.666095] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:16:31.618  [2024-12-09 16:27:00.666841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72590 ]
00:16:31.877  [2024-12-09 16:27:00.848520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:31.877  [2024-12-09 16:27:00.952872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:32.135  Running I/O for 5 seconds...
00:16:34.454      24128.00 IOPS,    94.25 MiB/s
[2024-12-09T16:27:04.614Z]     23648.00 IOPS,    92.38 MiB/s
[2024-12-09T16:27:05.554Z]     24277.33 IOPS,    94.83 MiB/s
[2024-12-09T16:27:06.493Z]     24528.00 IOPS,    95.81 MiB/s
[2024-12-09T16:27:06.493Z]     24064.00 IOPS,    94.00 MiB/s
00:16:37.314                                                                                                  Latency(us)
00:16:37.314  
[2024-12-09T16:27:06.493Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:37.314  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:16:37.314  	 xnvme_bdev          :       5.01   24052.60      93.96       0.00     0.00    2652.88    1500.22    7527.43
00:16:37.314  
[2024-12-09T16:27:06.493Z]  ===================================================================================================================
00:16:37.314  
[2024-12-09T16:27:06.493Z]  Total                       :              24052.60      93.96       0.00     0.00    2652.88    1500.22    7527.43
00:16:38.252  
00:16:38.252  real	0m13.614s
00:16:38.252  user	0m6.721s
00:16:38.252  sys	0m6.651s
00:16:38.252   16:27:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:38.252   16:27:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:38.252  ************************************
00:16:38.252  END TEST xnvme_bdevperf
00:16:38.252  ************************************
00:16:38.252   16:27:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:16:38.252   16:27:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:38.252   16:27:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:38.252   16:27:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:38.511  ************************************
00:16:38.511  START TEST xnvme_fio_plugin
00:16:38.511  ************************************
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:38.511    16:27:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:38.511    16:27:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:38.511    16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:38.511    16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:38.511    16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:38.511    16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:38.511   16:27:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:38.511  {
00:16:38.511    "subsystems": [
00:16:38.511      {
00:16:38.511        "subsystem": "bdev",
00:16:38.511        "config": [
00:16:38.511          {
00:16:38.511            "params": {
00:16:38.511              "io_mechanism": "io_uring",
00:16:38.511              "conserve_cpu": false,
00:16:38.511              "filename": "/dev/nvme0n1",
00:16:38.511              "name": "xnvme_bdev"
00:16:38.511            },
00:16:38.511            "method": "bdev_xnvme_create"
00:16:38.511          },
00:16:38.511          {
00:16:38.511            "method": "bdev_wait_for_examine"
00:16:38.511          }
00:16:38.511        ]
00:16:38.511      }
00:16:38.511    ]
00:16:38.511  }
00:16:38.511  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:38.511  fio-3.35
00:16:38.511  Starting 1 thread
00:16:45.128  
00:16:45.128  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72709: Mon Dec  9 16:27:13 2024
00:16:45.128    read: IOPS=22.8k, BW=89.0MiB/s (93.4MB/s)(445MiB/5001msec)
00:16:45.128      slat (usec): min=3, max=155, avg= 7.79, stdev= 3.24
00:16:45.128      clat (usec): min=1560, max=7467, avg=2500.36, stdev=300.90
00:16:45.128       lat (usec): min=1563, max=7494, avg=2508.14, stdev=301.92
00:16:45.128      clat percentiles (usec):
00:16:45.128       |  1.00th=[ 1762],  5.00th=[ 1991], 10.00th=[ 2147], 20.00th=[ 2278],
00:16:45.128       | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573],
00:16:45.128       | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2868], 95.00th=[ 2933],
00:16:45.128       | 99.00th=[ 3064], 99.50th=[ 3097], 99.90th=[ 3326], 99.95th=[ 6915],
00:16:45.128       | 99.99th=[ 7308]
00:16:45.128     bw (  KiB/s): min=87552, max=94208, per=100.00%, avg=91192.89, stdev=2208.80, samples=9
00:16:45.128     iops        : min=21888, max=23552, avg=22798.22, stdev=552.20, samples=9
00:16:45.128    lat (msec)   : 2=5.26%, 4=94.68%, 10=0.06%
00:16:45.128    cpu          : usr=35.94%, sys=62.64%, ctx=11, majf=0, minf=762
00:16:45.128    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:16:45.128       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:45.128       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:16:45.128       issued rwts: total=113984,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:45.128       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:45.128  
00:16:45.128  Run status group 0 (all jobs):
00:16:45.128     READ: bw=89.0MiB/s (93.4MB/s), 89.0MiB/s-89.0MiB/s (93.4MB/s-93.4MB/s), io=445MiB (467MB), run=5001-5001msec
00:16:45.697  -----------------------------------------------------
00:16:45.697  Suppressions used:
00:16:45.697    count      bytes template
00:16:45.697        1         11 /usr/src/fio/parse.c
00:16:45.697        1          8 libtcmalloc_minimal.so
00:16:45.697        1        904 libcrypto.so
00:16:45.697  -----------------------------------------------------
00:16:45.697  
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:45.697    16:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:45.697    16:27:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:45.697    16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:45.697    16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:45.697    16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:45.697    16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:45.697   16:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:45.697  {
00:16:45.697    "subsystems": [
00:16:45.697      {
00:16:45.697        "subsystem": "bdev",
00:16:45.697        "config": [
00:16:45.697          {
00:16:45.697            "params": {
00:16:45.697              "io_mechanism": "io_uring",
00:16:45.697              "conserve_cpu": false,
00:16:45.697              "filename": "/dev/nvme0n1",
00:16:45.697              "name": "xnvme_bdev"
00:16:45.697            },
00:16:45.697            "method": "bdev_xnvme_create"
00:16:45.697          },
00:16:45.697          {
00:16:45.697            "method": "bdev_wait_for_examine"
00:16:45.697          }
00:16:45.697        ]
00:16:45.697      }
00:16:45.697    ]
00:16:45.697  }
00:16:45.956  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:45.956  fio-3.35
00:16:45.956  Starting 1 thread
00:16:52.529  
00:16:52.529  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72807: Mon Dec  9 16:27:20 2024
00:16:52.529    write: IOPS=22.9k, BW=89.4MiB/s (93.7MB/s)(447MiB/5002msec); 0 zone resets
00:16:52.529      slat (nsec): min=3047, max=79437, avg=7905.80, stdev=3412.16
00:16:52.529      clat (usec): min=1471, max=6812, avg=2482.03, stdev=304.27
00:16:52.529       lat (usec): min=1475, max=6821, avg=2489.94, stdev=305.52
00:16:52.529      clat percentiles (usec):
00:16:52.529       |  1.00th=[ 1696],  5.00th=[ 1909], 10.00th=[ 2057], 20.00th=[ 2245],
00:16:52.529       | 30.00th=[ 2343], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573],
00:16:52.529       | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2868], 95.00th=[ 2933],
00:16:52.529       | 99.00th=[ 3064], 99.50th=[ 3097], 99.90th=[ 3228], 99.95th=[ 3294],
00:16:52.529       | 99.99th=[ 3458]
00:16:52.529     bw (  KiB/s): min=86016, max=107520, per=100.00%, avg=91874.67, stdev=6793.24, samples=9
00:16:52.529     iops        : min=21504, max=26880, avg=22968.67, stdev=1698.31, samples=9
00:16:52.529    lat (msec)   : 2=7.78%, 4=92.22%, 10=0.01%
00:16:52.530    cpu          : usr=37.27%, sys=61.29%, ctx=18, majf=0, minf=763
00:16:52.530    IO depths    : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:16:52.530       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:52.530       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:16:52.530       issued rwts: total=0,114416,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:52.530       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:52.530  
00:16:52.530  Run status group 0 (all jobs):
00:16:52.530    WRITE: bw=89.4MiB/s (93.7MB/s), 89.4MiB/s-89.4MiB/s (93.7MB/s-93.7MB/s), io=447MiB (469MB), run=5002-5002msec
00:16:53.098  -----------------------------------------------------
00:16:53.098  Suppressions used:
00:16:53.098    count      bytes template
00:16:53.098        1         11 /usr/src/fio/parse.c
00:16:53.098        1          8 libtcmalloc_minimal.so
00:16:53.098        1        904 libcrypto.so
00:16:53.098  -----------------------------------------------------
00:16:53.098  
00:16:53.098  
00:16:53.098  real	0m14.628s
00:16:53.098  user	0m7.461s
00:16:53.098  sys	0m6.767s
00:16:53.098   16:27:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:53.098   16:27:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:53.098  ************************************
00:16:53.098  END TEST xnvme_fio_plugin
00:16:53.098  ************************************
00:16:53.098   16:27:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:16:53.098   16:27:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:16:53.099   16:27:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:16:53.099   16:27:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:16:53.099   16:27:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:53.099   16:27:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:53.099   16:27:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:53.099  ************************************
00:16:53.099  START TEST xnvme_rpc
00:16:53.099  ************************************
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72900
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72900
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72900 ']'
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:53.099  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:53.099   16:27:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:53.099  [2024-12-09 16:27:22.239307] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:16:53.099  [2024-12-09 16:27:22.239430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72900 ]
00:16:53.357  [2024-12-09 16:27:22.419574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:53.357  [2024-12-09 16:27:22.525464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
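
waitforlisten blocks until the freshly launched spdk_tgt answers on the RPC socket. A hypothetical paraphrase of that wait loop, with the retry cadence and helper invocation assumed rather than taken from the harness:

    # Poll the UNIX-domain RPC socket until the target responds or retries run out.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
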
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.295  xnvme_bdev
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.295   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:16:54.295    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:16:54.554    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:54.554    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:16:54.554    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.554    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.554    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.554   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72900
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72900 ']'
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72900
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:54.555    16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72900
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72900'
00:16:54.555  killing process with pid 72900
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72900
00:16:54.555   16:27:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72900
00:16:57.093  ************************************
00:16:57.093  END TEST xnvme_rpc
00:16:57.093  ************************************
00:16:57.093  
00:16:57.093  real	0m3.751s
00:16:57.093  user	0m3.817s
00:16:57.093  sys	0m0.535s
00:16:57.093   16:27:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:57.093   16:27:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
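
For reference, the create/inspect/delete cycle this test walked through can be reproduced by hand. rpc_cmd is a thin wrapper around scripts/rpc.py, so the traced calls map roughly onto:

    # Create the bdev over io_uring with conserve_cpu enabled (-c), as in xnvme.sh@56.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    # Read a created-bdev parameter back out of the framework config, as rpc_xnvme does.
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # Tear the bdev down again, as in xnvme.sh@67.
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
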
00:16:57.093   16:27:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:57.093   16:27:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:57.093   16:27:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:57.093   16:27:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:57.093  ************************************
00:16:57.093  START TEST xnvme_bdevperf
00:16:57.093  ************************************
00:16:57.093   16:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:16:57.093   16:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:16:57.093   16:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:16:57.093   16:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:57.093   16:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:16:57.093    16:27:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:57.093    16:27:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:57.093    16:27:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:57.093  {
00:16:57.093    "subsystems": [
00:16:57.093      {
00:16:57.093        "subsystem": "bdev",
00:16:57.093        "config": [
00:16:57.093          {
00:16:57.093            "params": {
00:16:57.093              "io_mechanism": "io_uring",
00:16:57.093              "conserve_cpu": true,
00:16:57.093              "filename": "/dev/nvme0n1",
00:16:57.093              "name": "xnvme_bdev"
00:16:57.093            },
00:16:57.093            "method": "bdev_xnvme_create"
00:16:57.093          },
00:16:57.093          {
00:16:57.093            "method": "bdev_wait_for_examine"
00:16:57.093          }
00:16:57.093        ]
00:16:57.093      }
00:16:57.093    ]
00:16:57.093  }
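
The JSON above is produced by gen_conf and handed to bdevperf on file descriptor 62, which is why the command reads it back via --json /dev/fd/62. A minimal standalone sketch of the same wiring, with the config inlined as a here-document bound to fd 62:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
        62<<'JSON'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_xnvme_create", "params": {"io_mechanism": "io_uring",
       "conserve_cpu": true, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
      {"method": "bdev_wait_for_examine"}]}]}
    JSON
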
00:16:57.093  [2024-12-09 16:27:26.056980] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:16:57.093  [2024-12-09 16:27:26.057120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72975 ]
00:16:57.093  [2024-12-09 16:27:26.233865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:57.353  [2024-12-09 16:27:26.338539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:57.612  Running I/O for 5 seconds...
00:16:59.490      25280.00 IOPS,    98.75 MiB/s
[2024-12-09T16:27:30.049Z]     24992.00 IOPS,    97.62 MiB/s
[2024-12-09T16:27:30.988Z]     24746.67 IOPS,    96.67 MiB/s
[2024-12-09T16:27:31.926Z]     24288.00 IOPS,    94.88 MiB/s
00:17:02.747                                                                                                  Latency(us)
00:17:02.747  
[2024-12-09T16:27:31.926Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:02.747  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:17:02.748  	 xnvme_bdev          :       5.00   23956.07      93.58       0.00     0.00    2664.23    1283.08    8317.02
00:17:02.748  
[2024-12-09T16:27:31.927Z]  ===================================================================================================================
00:17:02.748  
[2024-12-09T16:27:31.927Z]  Total                       :              23956.07      93.58       0.00     0.00    2664.23    1283.08    8317.02
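
The MiB/s column follows directly from IOPS at the 4 KiB I/O size selected with -o 4096; a one-line check, not produced by bdevperf:

    awk 'BEGIN { printf "%.2f MiB/s\n", 23956.07 * 4096 / 1048576 }'
    # -> 93.58 MiB/s, matching the table.
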
00:17:03.686   16:27:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:03.686   16:27:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:17:03.686    16:27:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:03.686    16:27:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:03.686    16:27:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:03.686  {
00:17:03.686    "subsystems": [
00:17:03.686      {
00:17:03.686        "subsystem": "bdev",
00:17:03.686        "config": [
00:17:03.686          {
00:17:03.686            "params": {
00:17:03.686              "io_mechanism": "io_uring",
00:17:03.686              "conserve_cpu": true,
00:17:03.686              "filename": "/dev/nvme0n1",
00:17:03.686              "name": "xnvme_bdev"
00:17:03.686            },
00:17:03.686            "method": "bdev_xnvme_create"
00:17:03.686          },
00:17:03.686          {
00:17:03.686            "method": "bdev_wait_for_examine"
00:17:03.686          }
00:17:03.686        ]
00:17:03.686      }
00:17:03.686    ]
00:17:03.686  }
00:17:03.686  [2024-12-09 16:27:32.821730] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:17:03.686  [2024-12-09 16:27:32.821858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73056 ]
00:17:03.945  [2024-12-09 16:27:32.999657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:03.945  [2024-12-09 16:27:33.106657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:04.514  Running I/O for 5 seconds...
00:17:06.390      23168.00 IOPS,    90.50 MiB/s
[2024-12-09T16:27:36.508Z]     22560.00 IOPS,    88.12 MiB/s
[2024-12-09T16:27:37.888Z]     22634.67 IOPS,    88.42 MiB/s
[2024-12-09T16:27:38.457Z]     22496.00 IOPS,    87.88 MiB/s
[2024-12-09T16:27:38.457Z]     22528.00 IOPS,    88.00 MiB/s
00:17:09.278                                                                                                  Latency(us)
00:17:09.278  
[2024-12-09T16:27:38.457Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:09.278  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:17:09.278  	 xnvme_bdev          :       5.01   22513.54      87.94       0.00     0.00    2834.35    1408.10    8474.94
00:17:09.278  
[2024-12-09T16:27:38.457Z]  ===================================================================================================================
00:17:09.278  
[2024-12-09T16:27:38.457Z]  Total                       :              22513.54      87.94       0.00     0.00    2834.35    1408.10    8474.94
00:17:10.691  
00:17:10.691  real	0m13.561s
00:17:10.691  user	0m7.934s
00:17:10.691  sys	0m5.068s
00:17:10.691   16:27:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:10.691  ************************************
00:17:10.691  END TEST xnvme_bdevperf
00:17:10.691  ************************************
00:17:10.691   16:27:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:10.691   16:27:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:17:10.691   16:27:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:10.691   16:27:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:10.691   16:27:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:10.691  ************************************
00:17:10.691  START TEST xnvme_fio_plugin
00:17:10.691  ************************************
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:10.691    16:27:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:10.691    16:27:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:10.691    16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:10.691   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:10.692    16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:10.692    16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:10.692    16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
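
The sanitizer probing traced above reduces to a few lines of shell. The following is a readable paraphrase of it, not the harness source:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Resolve which ASAN runtime, if any, the external fio plugin links against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload ASAN ahead of the plugin so its interposers are installed first.
    [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $plugin"
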
00:17:10.692  {
00:17:10.692    "subsystems": [
00:17:10.692      {
00:17:10.692        "subsystem": "bdev",
00:17:10.692        "config": [
00:17:10.692          {
00:17:10.692            "params": {
00:17:10.692              "io_mechanism": "io_uring",
00:17:10.692              "conserve_cpu": true,
00:17:10.692              "filename": "/dev/nvme0n1",
00:17:10.692              "name": "xnvme_bdev"
00:17:10.692            },
00:17:10.692            "method": "bdev_xnvme_create"
00:17:10.692          },
00:17:10.692          {
00:17:10.692            "method": "bdev_wait_for_examine"
00:17:10.692          }
00:17:10.692        ]
00:17:10.692      }
00:17:10.692    ]
00:17:10.692  }
00:17:10.692   16:27:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
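
That long command line maps one-to-one onto an ordinary fio job file; an equivalent, illustrative form would be:

    cat > xnvme_bdev.fio <<'EOF'
    [xnvme_bdev]
    ioengine=spdk_bdev
    ; spdk_json_conf still needs fd 62 wired up as shown earlier
    spdk_json_conf=/dev/fd/62
    filename=xnvme_bdev
    direct=1
    bs=4k
    iodepth=64
    numjobs=1
    rw=randread
    time_based
    runtime=5
    thread
    EOF
    # Then, with LD_PRELOAD set as above: /usr/src/fio/fio xnvme_bdev.fio
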
00:17:10.692  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:10.692  fio-3.35
00:17:10.692  Starting 1 thread
00:17:17.361  
00:17:17.361  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73175: Mon Dec  9 16:27:45 2024
00:17:17.361    read: IOPS=23.3k, BW=91.1MiB/s (95.6MB/s)(456MiB/5001msec)
00:17:17.361      slat (usec): min=2, max=104, avg= 7.45, stdev= 3.35
00:17:17.361      clat (usec): min=1103, max=4979, avg=2445.44, stdev=363.70
00:17:17.361       lat (usec): min=1107, max=5007, avg=2452.89, stdev=365.13
00:17:17.361      clat percentiles (usec):
00:17:17.361       |  1.00th=[ 1352],  5.00th=[ 1696], 10.00th=[ 1991], 20.00th=[ 2212],
00:17:17.361       | 30.00th=[ 2311], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2573],
00:17:17.361       | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2868], 95.00th=[ 2933],
00:17:17.361       | 99.00th=[ 3064], 99.50th=[ 3130], 99.90th=[ 3326], 99.95th=[ 4359],
00:17:17.361       | 99.99th=[ 4883]
00:17:17.361     bw (  KiB/s): min=84480, max=116502, per=100.00%, avg=93934.11, stdev=9428.86, samples=9
00:17:17.361     iops        : min=21120, max=29125, avg=23483.44, stdev=2357.07, samples=9
00:17:17.361    lat (msec)   : 2=10.26%, 4=89.69%, 10=0.05%
00:17:17.361    cpu          : usr=48.68%, sys=46.78%, ctx=14, majf=0, minf=762
00:17:17.361    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:17:17.361       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.361       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:17:17.361       issued rwts: total=116672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:17.361       latency   : target=0, window=0, percentile=100.00%, depth=64
00:17:17.361  
00:17:17.361  Run status group 0 (all jobs):
00:17:17.361     READ: bw=91.1MiB/s (95.6MB/s), 91.1MiB/s-91.1MiB/s (95.6MB/s-95.6MB/s), io=456MiB (478MB), run=5001-5001msec
00:17:17.930  -----------------------------------------------------
00:17:17.930  Suppressions used:
00:17:17.930    count      bytes template
00:17:17.930        1         11 /usr/src/fio/parse.c
00:17:17.930        1          8 libtcmalloc_minimal.so
00:17:17.930        1        904 libcrypto.so
00:17:17.930  -----------------------------------------------------
00:17:17.930  
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:17.930    16:27:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:17.930    16:27:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:17.930    16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:17.930    16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:17.930    16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:17.930    16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:17.930   16:27:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:17.930  {
00:17:17.930    "subsystems": [
00:17:17.930      {
00:17:17.930        "subsystem": "bdev",
00:17:17.930        "config": [
00:17:17.930          {
00:17:17.930            "params": {
00:17:17.930              "io_mechanism": "io_uring",
00:17:17.930              "conserve_cpu": true,
00:17:17.930              "filename": "/dev/nvme0n1",
00:17:17.930              "name": "xnvme_bdev"
00:17:17.930            },
00:17:17.930            "method": "bdev_xnvme_create"
00:17:17.930          },
00:17:17.930          {
00:17:17.930            "method": "bdev_wait_for_examine"
00:17:17.930          }
00:17:17.930        ]
00:17:17.930      }
00:17:17.930    ]
00:17:17.930  }
00:17:18.190  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:18.190  fio-3.35
00:17:18.190  Starting 1 thread
00:17:24.760  
00:17:24.760  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73267: Mon Dec  9 16:27:52 2024
00:17:24.760    write: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(452MiB/5001msec); 0 zone resets
00:17:24.760      slat (usec): min=3, max=140, avg= 7.62, stdev= 3.29
00:17:24.760      clat (usec): min=644, max=7279, avg=2460.53, stdev=346.53
00:17:24.760       lat (usec): min=653, max=7308, avg=2468.15, stdev=347.77
00:17:24.760      clat percentiles (usec):
00:17:24.760       |  1.00th=[ 1713],  5.00th=[ 1876], 10.00th=[ 2008], 20.00th=[ 2180],
00:17:24.760       | 30.00th=[ 2311], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2573],
00:17:24.760       | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2868], 95.00th=[ 2933],
00:17:24.760       | 99.00th=[ 3064], 99.50th=[ 3130], 99.90th=[ 5866], 99.95th=[ 6783],
00:17:24.760       | 99.99th=[ 7177]
00:17:24.760     bw (  KiB/s): min=84992, max=105472, per=100.00%, avg=92899.56, stdev=6700.70, samples=9
00:17:24.760     iops        : min=21248, max=26368, avg=23224.89, stdev=1675.18, samples=9
00:17:24.760    lat (usec)   : 750=0.01%, 1000=0.01%
00:17:24.760    lat (msec)   : 2=9.86%, 4=90.02%, 10=0.11%
00:17:24.760    cpu          : usr=49.78%, sys=45.94%, ctx=11, majf=0, minf=763
00:17:24.760    IO depths    : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:17:24.760       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:24.760       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0%
00:17:24.760       issued rwts: total=0,115725,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:24.760       latency   : target=0, window=0, percentile=100.00%, depth=64
00:17:24.760  
00:17:24.760  Run status group 0 (all jobs):
00:17:24.760    WRITE: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=452MiB (474MB), run=5001-5001msec
00:17:25.019  -----------------------------------------------------
00:17:25.019  Suppressions used:
00:17:25.019    count      bytes template
00:17:25.019        1         11 /usr/src/fio/parse.c
00:17:25.019        1          8 libtcmalloc_minimal.so
00:17:25.019        1        904 libcrypto.so
00:17:25.019  -----------------------------------------------------
00:17:25.019  
00:17:25.279  
00:17:25.279  real	0m14.610s
00:17:25.279  user	0m8.640s
00:17:25.279  sys	0m5.286s
00:17:25.279   16:27:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:25.279  ************************************
00:17:25.279  END TEST xnvme_fio_plugin
00:17:25.279  ************************************
00:17:25.279   16:27:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:17:25.279   16:27:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:17:25.279   16:27:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:25.279   16:27:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:25.279   16:27:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:25.279  ************************************
00:17:25.279  START TEST xnvme_rpc
00:17:25.279  ************************************
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73359
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73359
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73359 ']'
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:25.279  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:25.279   16:27:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:25.279  [2024-12-09 16:27:54.399684] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:17:25.279  [2024-12-09 16:27:54.400252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73359 ]
00:17:25.538  [2024-12-09 16:27:54.581561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:25.538  [2024-12-09 16:27:54.685686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:26.480  xnvme_bdev
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.480   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.480    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73359
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73359 ']'
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73359
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:26.740    16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73359
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73359'
00:17:26.740  killing process with pid 73359
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73359
00:17:26.740   16:27:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73359
00:17:29.277  
00:17:29.277  real	0m3.740s
00:17:29.277  user	0m3.801s
00:17:29.277  sys	0m0.537s
00:17:29.277   16:27:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:29.277  ************************************
00:17:29.277  END TEST xnvme_rpc
00:17:29.277  ************************************
00:17:29.277   16:27:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
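
This second xnvme_rpc pass differs from the io_uring one only in the target device and I/O mechanism; by hand, the create step would be roughly:

    # io_uring_cmd drives the NVMe character device via passthrough commands;
    # no -c flag here, since this pass runs with conserve_cpu=false.
    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
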
00:17:29.277   16:27:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:17:29.277   16:27:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:29.277   16:27:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:29.277   16:27:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:29.277  ************************************
00:17:29.277  START TEST xnvme_bdevperf
00:17:29.277  ************************************
00:17:29.277   16:27:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:17:29.277   16:27:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:17:29.277   16:27:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:17:29.277   16:27:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:29.277   16:27:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:17:29.277    16:27:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:29.277    16:27:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:29.277    16:27:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:29.277  {
00:17:29.277    "subsystems": [
00:17:29.277      {
00:17:29.277        "subsystem": "bdev",
00:17:29.277        "config": [
00:17:29.277          {
00:17:29.277            "params": {
00:17:29.277              "io_mechanism": "io_uring_cmd",
00:17:29.277              "conserve_cpu": false,
00:17:29.277              "filename": "/dev/ng0n1",
00:17:29.277              "name": "xnvme_bdev"
00:17:29.277            },
00:17:29.277            "method": "bdev_xnvme_create"
00:17:29.277          },
00:17:29.277          {
00:17:29.277            "method": "bdev_wait_for_examine"
00:17:29.277          }
00:17:29.277        ]
00:17:29.277      }
00:17:29.277    ]
00:17:29.277  }
00:17:29.277  [2024-12-09 16:27:58.199990] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:17:29.277  [2024-12-09 16:27:58.200108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73433 ]
00:17:29.277  [2024-12-09 16:27:58.379279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:29.536  [2024-12-09 16:27:58.484771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.796  Running I/O for 5 seconds...
00:17:31.672      25280.00 IOPS,    98.75 MiB/s
[2024-12-09T16:28:02.232Z]     25568.00 IOPS,    99.88 MiB/s
[2024-12-09T16:28:03.170Z]     24640.00 IOPS,    96.25 MiB/s
[2024-12-09T16:28:04.109Z]     24256.00 IOPS,    94.75 MiB/s
[2024-12-09T16:28:04.109Z]     23795.20 IOPS,    92.95 MiB/s
00:17:34.930                                                                                                  Latency(us)
00:17:34.930  
[2024-12-09T16:28:04.109Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:34.930  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:17:34.930  	 xnvme_bdev          :       5.01   23763.54      92.83       0.00     0.00    2685.30    1283.08    8474.94
00:17:34.930  
[2024-12-09T16:28:04.109Z]  ===================================================================================================================
00:17:34.930  
[2024-12-09T16:28:04.109Z]  Total                       :              23763.54      92.83       0.00     0.00    2685.30    1283.08    8474.94
00:17:35.867   16:28:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:35.867   16:28:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:17:35.867    16:28:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:35.867    16:28:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:35.867    16:28:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:36.125  {
00:17:36.125    "subsystems": [
00:17:36.125      {
00:17:36.125        "subsystem": "bdev",
00:17:36.125        "config": [
00:17:36.125          {
00:17:36.125            "params": {
00:17:36.125              "io_mechanism": "io_uring_cmd",
00:17:36.125              "conserve_cpu": false,
00:17:36.125              "filename": "/dev/ng0n1",
00:17:36.125              "name": "xnvme_bdev"
00:17:36.125            },
00:17:36.125            "method": "bdev_xnvme_create"
00:17:36.125          },
00:17:36.125          {
00:17:36.125            "method": "bdev_wait_for_examine"
00:17:36.125          }
00:17:36.125        ]
00:17:36.125      }
00:17:36.125    ]
00:17:36.125  }
00:17:36.125  [2024-12-09 16:28:05.122757] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:17:36.125  [2024-12-09 16:28:05.123032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73517 ]
00:17:36.383  [2024-12-09 16:28:05.302492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:36.383  [2024-12-09 16:28:05.431983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:36.952  Running I/O for 5 seconds...
00:17:38.829      24832.00 IOPS,    97.00 MiB/s
[2024-12-09T16:28:08.946Z]     23296.00 IOPS,    91.00 MiB/s
[2024-12-09T16:28:09.885Z]     23680.00 IOPS,    92.50 MiB/s
[2024-12-09T16:28:10.823Z]     23408.00 IOPS,    91.44 MiB/s
00:17:41.644                                                                                                  Latency(us)
00:17:41.644  
[2024-12-09T16:28:10.823Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:41.644  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:17:41.644  	 xnvme_bdev          :       5.00   23445.84      91.59       0.00     0.00    2721.22    1131.75    7790.62
00:17:41.644  
[2024-12-09T16:28:10.823Z]  ===================================================================================================================
00:17:41.644  
[2024-12-09T16:28:10.823Z]  Total                       :              23445.84      91.59       0.00     0.00    2721.22    1131.75    7790.62
00:17:43.024   16:28:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:43.024   16:28:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:17:43.024    16:28:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:43.024    16:28:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:43.024    16:28:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:43.024  {
00:17:43.024    "subsystems": [
00:17:43.024      {
00:17:43.024        "subsystem": "bdev",
00:17:43.024        "config": [
00:17:43.024          {
00:17:43.024            "params": {
00:17:43.024              "io_mechanism": "io_uring_cmd",
00:17:43.024              "conserve_cpu": false,
00:17:43.024              "filename": "/dev/ng0n1",
00:17:43.024              "name": "xnvme_bdev"
00:17:43.024            },
00:17:43.024            "method": "bdev_xnvme_create"
00:17:43.024          },
00:17:43.024          {
00:17:43.024            "method": "bdev_wait_for_examine"
00:17:43.024          }
00:17:43.024        ]
00:17:43.024      }
00:17:43.024    ]
00:17:43.024  }
00:17:43.024  [2024-12-09 16:28:11.983582] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:17:43.024  [2024-12-09 16:28:11.983845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73599 ]
00:17:43.024  [2024-12-09 16:28:12.161273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:43.284  [2024-12-09 16:28:12.266193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:43.543  Running I/O for 5 seconds...
00:17:45.417      73536.00 IOPS,   287.25 MiB/s
[2024-12-09T16:28:16.013Z]     73600.00 IOPS,   287.50 MiB/s
[2024-12-09T16:28:16.581Z]     73514.67 IOPS,   287.17 MiB/s
[2024-12-09T16:28:17.961Z]     73552.00 IOPS,   287.31 MiB/s
[2024-12-09T16:28:17.961Z]     73523.20 IOPS,   287.20 MiB/s
00:17:48.782                                                                                                  Latency(us)
00:17:48.782  
[2024-12-09T16:28:17.961Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:48.782  Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:17:48.782  	 xnvme_bdev          :       5.00   73508.87     287.14       0.00     0.00     868.02     681.02    3158.36
00:17:48.783  
[2024-12-09T16:28:17.962Z]  ===================================================================================================================
00:17:48.783  
[2024-12-09T16:28:17.962Z]  Total                       :              73508.87     287.14       0.00     0.00     868.02     681.02    3158.36
00:17:49.720   16:28:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:49.720   16:28:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:17:49.720    16:28:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:49.720    16:28:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:49.720    16:28:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:49.720  {
00:17:49.720    "subsystems": [
00:17:49.720      {
00:17:49.720        "subsystem": "bdev",
00:17:49.720        "config": [
00:17:49.720          {
00:17:49.720            "params": {
00:17:49.720              "io_mechanism": "io_uring_cmd",
00:17:49.720              "conserve_cpu": false,
00:17:49.720              "filename": "/dev/ng0n1",
00:17:49.720              "name": "xnvme_bdev"
00:17:49.720            },
00:17:49.720            "method": "bdev_xnvme_create"
00:17:49.720          },
00:17:49.720          {
00:17:49.720            "method": "bdev_wait_for_examine"
00:17:49.720          }
00:17:49.720        ]
00:17:49.720      }
00:17:49.720    ]
00:17:49.720  }
00:17:49.720  [2024-12-09 16:28:18.730871] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:17:49.720  [2024-12-09 16:28:18.731003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73673 ]
00:17:49.980  [2024-12-09 16:28:18.911293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.980  [2024-12-09 16:28:19.018354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:50.238  Running I/O for 5 seconds...
00:17:52.182      49613.00 IOPS,   193.80 MiB/s
[2024-12-09T16:28:22.740Z]     30283.00 IOPS,   118.29 MiB/s
[2024-12-09T16:28:23.677Z]     34426.33 IOPS,   134.48 MiB/s
[2024-12-09T16:28:24.614Z]     38714.50 IOPS,   151.23 MiB/s
00:17:55.435                                                                                                  Latency(us)
00:17:55.435  
[2024-12-09T16:28:24.614Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:55.435  Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:17:55.435  	 xnvme_bdev          :       5.00   41202.92     160.95       0.00     0.00    1549.84      80.60   39374.24
00:17:55.435  
[2024-12-09T16:28:24.614Z]  ===================================================================================================================
00:17:55.435  
[2024-12-09T16:28:24.614Z]  Total                       :              41202.92     160.95       0.00     0.00    1549.84      80.60   39374.24
00:17:56.374  
00:17:56.374  real	0m27.309s
00:17:56.374  user	0m14.001s
00:17:56.374  sys	0m12.860s
00:17:56.374   16:28:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:56.374   16:28:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:56.374  ************************************
00:17:56.374  END TEST xnvme_bdevperf
00:17:56.374  ************************************
00:17:56.374   16:28:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:17:56.374   16:28:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:56.374   16:28:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:56.374   16:28:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:56.374  ************************************
00:17:56.374  START TEST xnvme_fio_plugin
00:17:56.374  ************************************
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:56.374    16:28:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:56.374    16:28:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:56.374    16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:56.374    16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:56.374    16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:56.374    16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:56.374  {
00:17:56.374    "subsystems": [
00:17:56.374      {
00:17:56.374        "subsystem": "bdev",
00:17:56.374        "config": [
00:17:56.374          {
00:17:56.374            "params": {
00:17:56.374              "io_mechanism": "io_uring_cmd",
00:17:56.374              "conserve_cpu": false,
00:17:56.374              "filename": "/dev/ng0n1",
00:17:56.374              "name": "xnvme_bdev"
00:17:56.374            },
00:17:56.374            "method": "bdev_xnvme_create"
00:17:56.374          },
00:17:56.374          {
00:17:56.374            "method": "bdev_wait_for_examine"
00:17:56.374          }
00:17:56.374        ]
00:17:56.374      }
00:17:56.374    ]
00:17:56.374  }
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:56.374   16:28:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:56.633  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:56.633  fio-3.35
00:17:56.633  Starting 1 thread
00:18:03.205  
00:18:03.205  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73803: Mon Dec  9 16:28:31 2024
00:18:03.205    read: IOPS=27.2k, BW=106MiB/s (111MB/s)(531MiB/5001msec)
00:18:03.205      slat (nsec): min=2151, max=88235, avg=6464.50, stdev=3832.31
00:18:03.205      clat (usec): min=960, max=4295, avg=2096.68, stdev=536.33
00:18:03.205       lat (usec): min=963, max=4303, avg=2103.15, stdev=538.73
00:18:03.205      clat percentiles (usec):
00:18:03.205       |  1.00th=[ 1106],  5.00th=[ 1221], 10.00th=[ 1319], 20.00th=[ 1483],
00:18:03.205       | 30.00th=[ 1680], 40.00th=[ 2008], 50.00th=[ 2245], 60.00th=[ 2376],
00:18:03.205       | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 2737], 95.00th=[ 2802],
00:18:03.205       | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3228], 99.95th=[ 3785],
00:18:03.205       | 99.99th=[ 4228]
00:18:03.205     bw (  KiB/s): min=87727, max=156160, per=98.41%, avg=106902.44, stdev=23482.98, samples=9
00:18:03.205     iops        : min=21931, max=39040, avg=26725.44, stdev=5870.78, samples=9
00:18:03.205    lat (usec)   : 1000=0.01%
00:18:03.205    lat (msec)   : 2=39.78%, 4=60.17%, 10=0.05%
00:18:03.205    cpu          : usr=36.62%, sys=62.00%, ctx=9, majf=0, minf=762
00:18:03.205    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:18:03.205       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:03.205       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:18:03.205       issued rwts: total=135808,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:03.205       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:03.205  
00:18:03.205  Run status group 0 (all jobs):
00:18:03.205     READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=531MiB (556MB), run=5001-5001msec
00:18:03.773  -----------------------------------------------------
00:18:03.773  Suppressions used:
00:18:03.773    count      bytes template
00:18:03.773        1         11 /usr/src/fio/parse.c
00:18:03.773        1          8 libtcmalloc_minimal.so
00:18:03.773        1        904 libcrypto.so
00:18:03.773  -----------------------------------------------------
00:18:03.773  
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:03.773    16:28:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:03.773    16:28:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:03.773    16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:03.773   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:03.773    16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:03.773    16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:03.774    16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:03.774  {
00:18:03.774    "subsystems": [
00:18:03.774      {
00:18:03.774        "subsystem": "bdev",
00:18:03.774        "config": [
00:18:03.774          {
00:18:03.774            "params": {
00:18:03.774              "io_mechanism": "io_uring_cmd",
00:18:03.774              "conserve_cpu": false,
00:18:03.774              "filename": "/dev/ng0n1",
00:18:03.774              "name": "xnvme_bdev"
00:18:03.774            },
00:18:03.774            "method": "bdev_xnvme_create"
00:18:03.774          },
00:18:03.774          {
00:18:03.774            "method": "bdev_wait_for_examine"
00:18:03.774          }
00:18:03.774        ]
00:18:03.774      }
00:18:03.774    ]
00:18:03.774  }
00:18:03.774   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:03.774   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:03.774   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:03.774   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:03.774   16:28:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:04.033  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:04.033  fio-3.35
00:18:04.033  Starting 1 thread
00:18:10.608  
00:18:10.608  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73902: Mon Dec  9 16:28:38 2024
00:18:10.608    write: IOPS=24.9k, BW=97.3MiB/s (102MB/s)(486MiB/5001msec); 0 zone resets
00:18:10.608      slat (usec): min=2, max=188, avg= 7.68, stdev= 4.20
00:18:10.608      clat (usec): min=524, max=7570, avg=2262.22, stdev=547.10
00:18:10.608       lat (usec): min=528, max=7599, avg=2269.90, stdev=549.27
00:18:10.608      clat percentiles (usec):
00:18:10.608       |  1.00th=[ 1045],  5.00th=[ 1205], 10.00th=[ 1336], 20.00th=[ 1696],
00:18:10.608       | 30.00th=[ 2180], 40.00th=[ 2311], 50.00th=[ 2409], 60.00th=[ 2507],
00:18:10.608       | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2868],
00:18:10.608       | 99.00th=[ 2999], 99.50th=[ 3097], 99.90th=[ 3621], 99.95th=[ 6915],
00:18:10.608       | 99.99th=[ 7439]
00:18:10.608     bw (  KiB/s): min=84480, max=149256, per=100.00%, avg=101146.67, stdev=21996.91, samples=9
00:18:10.608     iops        : min=21120, max=37314, avg=25286.67, stdev=5499.23, samples=9
00:18:10.608    lat (usec)   : 750=0.14%, 1000=0.54%
00:18:10.608    lat (msec)   : 2=25.15%, 4=74.13%, 10=0.05%
00:18:10.608    cpu          : usr=40.52%, sys=57.84%, ctx=54, majf=0, minf=763
00:18:10.608    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.8%, 32=50.3%, >=64=1.6%
00:18:10.608       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:10.608       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0%
00:18:10.608       issued rwts: total=0,124542,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:10.608       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:10.608  
00:18:10.608  Run status group 0 (all jobs):
00:18:10.608    WRITE: bw=97.3MiB/s (102MB/s), 97.3MiB/s-97.3MiB/s (102MB/s-102MB/s), io=486MiB (510MB), run=5001-5001msec
00:18:11.178  -----------------------------------------------------
00:18:11.178  Suppressions used:
00:18:11.178    count      bytes template
00:18:11.178        1         11 /usr/src/fio/parse.c
00:18:11.178        1          8 libtcmalloc_minimal.so
00:18:11.178        1        904 libcrypto.so
00:18:11.178  -----------------------------------------------------
00:18:11.178  
00:18:11.178  
00:18:11.178  real	0m14.630s
00:18:11.178  user	0m7.655s
00:18:11.178  sys	0m6.580s
00:18:11.178   16:28:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:11.178  ************************************
00:18:11.178  END TEST xnvme_fio_plugin
00:18:11.178  ************************************
00:18:11.178   16:28:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
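[editor's note] The LD_PRELOAD value in each of those runs comes from the sanitizer probe visible in the traces: ldd output for the plugin is filtered through grep and awk to resolve the ASan runtime path, which is then preloaded ahead of the plugin itself. Condensed into a standalone sketch (the loop shape follows the traced sanitizers array; variable names are illustrative):

    # Sketch: resolve the sanitizer runtime a shared object links against,
    # so it can be placed first in LD_PRELOAD.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    export LD_PRELOAD="$asan_lib $plugin"   # e.g. "/usr/lib64/libasan.so.8 .../spdk_bdev"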
00:18:11.178   16:28:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:18:11.178   16:28:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:18:11.178   16:28:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:18:11.178   16:28:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:18:11.178   16:28:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:11.178   16:28:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:11.178   16:28:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:11.178  ************************************
00:18:11.178  START TEST xnvme_rpc
00:18:11.178  ************************************
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73987
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73987
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73987 ']'
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:11.178  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:11.178   16:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:11.178  [2024-12-09 16:28:40.305601] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:18:11.178  [2024-12-09 16:28:40.305952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73987 ]
00:18:11.438  [2024-12-09 16:28:40.483881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:11.438  [2024-12-09 16:28:40.587310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:12.377  xnvme_bdev
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.377   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.377    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:12.637    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73987
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73987 ']'
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73987
00:18:12.637    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:12.637    16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73987
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73987'
00:18:12.637  killing process with pid 73987
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73987
00:18:12.637   16:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73987
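[editor's note] killprocess, per the trace above, does not signal a pid blindly: it first confirms the process is alive (kill -0) and re-reads its comm name (reactor_0 here, checked against sudo) before killing and reaping it. Approximately (simplified sketch; the real helper carries more special-casing):

    # Sketch: kill a target process only after re-validating the pid.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }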
00:18:15.238  
00:18:15.238  real	0m3.683s
00:18:15.238  user	0m3.738s
00:18:15.238  sys	0m0.553s
00:18:15.238  ************************************
00:18:15.238  END TEST xnvme_rpc
00:18:15.238  ************************************
00:18:15.238   16:28:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:15.238   16:28:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
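[editor's note] The xnvme_rpc test above never issues I/O; it only checks that configuration round-trips through the target: bdev_xnvme_create with -c (conserve_cpu=true), each parameter read back out of framework_get_config with a jq filter, then bdev_xnvme_delete. The same round trip against a running spdk_tgt, sketched with the standard rpc.py client (default /var/tmp/spdk.sock socket assumed):

    # Sketch: create, verify, and delete an xnvme bdev over JSON-RPC.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    "$rpc" framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true
    "$rpc" bdev_xnvme_delete xnvme_bdev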
00:18:15.238   16:28:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:18:15.238   16:28:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:15.238   16:28:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:15.238   16:28:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:15.238  ************************************
00:18:15.238  START TEST xnvme_bdevperf
00:18:15.238  ************************************
00:18:15.238   16:28:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:18:15.238   16:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:18:15.238   16:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:18:15.238   16:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:15.238   16:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:18:15.238    16:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:15.238    16:28:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:15.238    16:28:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:15.238  {
00:18:15.238    "subsystems": [
00:18:15.238      {
00:18:15.238        "subsystem": "bdev",
00:18:15.238        "config": [
00:18:15.238          {
00:18:15.238            "params": {
00:18:15.238              "io_mechanism": "io_uring_cmd",
00:18:15.238              "conserve_cpu": true,
00:18:15.238              "filename": "/dev/ng0n1",
00:18:15.238              "name": "xnvme_bdev"
00:18:15.238            },
00:18:15.238            "method": "bdev_xnvme_create"
00:18:15.238          },
00:18:15.238          {
00:18:15.238            "method": "bdev_wait_for_examine"
00:18:15.238          }
00:18:15.238        ]
00:18:15.238      }
00:18:15.238    ]
00:18:15.238  }
00:18:15.238  [2024-12-09 16:28:44.052314] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:18:15.238  [2024-12-09 16:28:44.052442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ]
00:18:15.238  [2024-12-09 16:28:44.228543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:15.238  [2024-12-09 16:28:44.331303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:15.567  Running I/O for 5 seconds...
00:18:17.912      30080.00 IOPS,   117.50 MiB/s
[2024-12-09T16:28:48.029Z]     27456.00 IOPS,   107.25 MiB/s
[2024-12-09T16:28:48.968Z]     26048.00 IOPS,   101.75 MiB/s
[2024-12-09T16:28:49.907Z]     25248.00 IOPS,    98.62 MiB/s
[2024-12-09T16:28:49.907Z]     25190.40 IOPS,    98.40 MiB/s
00:18:20.728                                                                                                  Latency(us)
00:18:20.728  
[2024-12-09T16:28:49.907Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:20.728  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:18:20.728  	 xnvme_bdev          :       5.01   25147.27      98.23       0.00     0.00    2537.16     842.23    8632.85
00:18:20.728  
[2024-12-09T16:28:49.907Z]  ===================================================================================================================
00:18:20.728  
[2024-12-09T16:28:49.907Z]  Total                       :              25147.27      98.23       0.00     0.00    2537.16     842.23    8632.85
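[editor's note] These bdevperf passes reuse the same generated JSON, again delivered over a file descriptor, and simply sweep the workload flag across the io_pattern list (randread here, then randwrite, unmap, and write_zeroes below). One invocation in isolation, with process substitution standing in for the harness's /dev/fd/62 (gen_conf is the harness helper that prints the bdev JSON seen above):

    # Sketch: 5-second QD64 4KiB randread against the xnvme bdev.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096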
00:18:21.665   16:28:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:21.665   16:28:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:18:21.665    16:28:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:21.665    16:28:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:21.665    16:28:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:21.665  {
00:18:21.665    "subsystems": [
00:18:21.665      {
00:18:21.665        "subsystem": "bdev",
00:18:21.665        "config": [
00:18:21.665          {
00:18:21.665            "params": {
00:18:21.665              "io_mechanism": "io_uring_cmd",
00:18:21.665              "conserve_cpu": true,
00:18:21.665              "filename": "/dev/ng0n1",
00:18:21.665              "name": "xnvme_bdev"
00:18:21.665            },
00:18:21.665            "method": "bdev_xnvme_create"
00:18:21.665          },
00:18:21.665          {
00:18:21.665            "method": "bdev_wait_for_examine"
00:18:21.665          }
00:18:21.665        ]
00:18:21.665      }
00:18:21.666    ]
00:18:21.666  }
00:18:21.925  [2024-12-09 16:28:50.847085] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:18:21.925  [2024-12-09 16:28:50.847212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74142 ]
00:18:21.925  [2024-12-09 16:28:51.028511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.184  [2024-12-09 16:28:51.139086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.443  Running I/O for 5 seconds...
00:18:24.318      25142.00 IOPS,    98.21 MiB/s
[2024-12-09T16:28:54.878Z]     26484.00 IOPS,   103.45 MiB/s
[2024-12-09T16:28:55.818Z]     24994.67 IOPS,    97.64 MiB/s
[2024-12-09T16:28:56.757Z]     24282.00 IOPS,    94.85 MiB/s
00:18:27.578                                                                                                  Latency(us)
00:18:27.578  
[2024-12-09T16:28:56.757Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:27.578  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:18:27.578  	 xnvme_bdev          :       5.01   23769.56      92.85       0.00     0.00    2683.63      70.32    8369.66
00:18:27.578  
[2024-12-09T16:28:56.757Z]  ===================================================================================================================
00:18:27.578  
[2024-12-09T16:28:56.757Z]  Total                       :              23769.56      92.85       0.00     0.00    2683.63      70.32    8369.66
00:18:28.515   16:28:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:28.515   16:28:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:18:28.515    16:28:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:28.515    16:28:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:28.515    16:28:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:28.515  {
00:18:28.515    "subsystems": [
00:18:28.515      {
00:18:28.515        "subsystem": "bdev",
00:18:28.515        "config": [
00:18:28.515          {
00:18:28.515            "params": {
00:18:28.515              "io_mechanism": "io_uring_cmd",
00:18:28.515              "conserve_cpu": true,
00:18:28.515              "filename": "/dev/ng0n1",
00:18:28.515              "name": "xnvme_bdev"
00:18:28.515            },
00:18:28.516            "method": "bdev_xnvme_create"
00:18:28.516          },
00:18:28.516          {
00:18:28.516            "method": "bdev_wait_for_examine"
00:18:28.516          }
00:18:28.516        ]
00:18:28.516      }
00:18:28.516    ]
00:18:28.516  }
00:18:28.516  [2024-12-09 16:28:57.660138] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:18:28.516  [2024-12-09 16:28:57.660248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74223 ]
00:18:28.775  [2024-12-09 16:28:57.839278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:28.775  [2024-12-09 16:28:57.947568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:29.343  Running I/O for 5 seconds...
00:18:31.215      73152.00 IOPS,   285.75 MiB/s
[2024-12-09T16:29:01.332Z]     72672.00 IOPS,   283.88 MiB/s
[2024-12-09T16:29:02.712Z]     72448.00 IOPS,   283.00 MiB/s
[2024-12-09T16:29:03.280Z]     72544.00 IOPS,   283.38 MiB/s
00:18:34.101                                                                                                  Latency(us)
00:18:34.101  
[2024-12-09T16:29:03.280Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:34.101  Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:18:34.101  	 xnvme_bdev          :       5.00   72585.62     283.54       0.00     0.00     879.11     618.51    3579.48
00:18:34.101  
[2024-12-09T16:29:03.280Z]  ===================================================================================================================
00:18:34.101  
[2024-12-09T16:29:03.280Z]  Total                       :              72585.62     283.54       0.00     0.00     879.11     618.51    3579.48
00:18:35.480   16:29:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:35.480   16:29:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:18:35.480    16:29:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:35.480    16:29:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:35.480    16:29:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:35.480  {
00:18:35.480    "subsystems": [
00:18:35.480      {
00:18:35.480        "subsystem": "bdev",
00:18:35.480        "config": [
00:18:35.480          {
00:18:35.480            "params": {
00:18:35.480              "io_mechanism": "io_uring_cmd",
00:18:35.480              "conserve_cpu": true,
00:18:35.480              "filename": "/dev/ng0n1",
00:18:35.481              "name": "xnvme_bdev"
00:18:35.481            },
00:18:35.481            "method": "bdev_xnvme_create"
00:18:35.481          },
00:18:35.481          {
00:18:35.481            "method": "bdev_wait_for_examine"
00:18:35.481          }
00:18:35.481        ]
00:18:35.481      }
00:18:35.481    ]
00:18:35.481  }
00:18:35.481  [2024-12-09 16:29:04.432846] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:18:35.481  [2024-12-09 16:29:04.433135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74299 ]
00:18:35.481  [2024-12-09 16:29:04.614257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.740  [2024-12-09 16:29:04.720557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.999  Running I/O for 5 seconds...
00:18:38.312      44792.00 IOPS,   174.97 MiB/s
[2024-12-09T16:29:08.059Z]     52122.00 IOPS,   203.60 MiB/s
[2024-12-09T16:29:09.059Z]     49154.33 IOPS,   192.01 MiB/s
[2024-12-09T16:29:10.434Z]     47801.25 IOPS,   186.72 MiB/s
[2024-12-09T16:29:10.434Z]     49297.40 IOPS,   192.57 MiB/s
00:18:41.255                                                                                                  Latency(us)
00:18:41.255  
[2024-12-09T16:29:10.434Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:41.255  Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:18:41.255  	 xnvme_bdev          :       5.00   49261.99     192.43       0.00     0.00    1293.47      74.85   12686.09
00:18:41.255  
[2024-12-09T16:29:10.434Z]  ===================================================================================================================
00:18:41.255  
[2024-12-09T16:29:10.434Z]  Total                       :              49261.99     192.43       0.00     0.00    1293.47      74.85   12686.09
00:18:42.193  
00:18:42.193  real	0m27.176s
00:18:42.193  user	0m16.684s
00:18:42.193  sys	0m8.430s
00:18:42.193   16:29:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:42.193   16:29:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:42.193  ************************************
00:18:42.193  END TEST xnvme_bdevperf
00:18:42.193  ************************************
00:18:42.193   16:29:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:18:42.193   16:29:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:42.193   16:29:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:42.193   16:29:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:42.193  ************************************
00:18:42.193  START TEST xnvme_fio_plugin
00:18:42.193  ************************************
00:18:42.193   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:42.194    16:29:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:42.194    16:29:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:42.194    16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:42.194    16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:42.194    16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:42.194    16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:42.194  {
00:18:42.194    "subsystems": [
00:18:42.194      {
00:18:42.194        "subsystem": "bdev",
00:18:42.194        "config": [
00:18:42.194          {
00:18:42.194            "params": {
00:18:42.194              "io_mechanism": "io_uring_cmd",
00:18:42.194              "conserve_cpu": true,
00:18:42.194              "filename": "/dev/ng0n1",
00:18:42.194              "name": "xnvme_bdev"
00:18:42.194            },
00:18:42.194            "method": "bdev_xnvme_create"
00:18:42.194          },
00:18:42.194          {
00:18:42.194            "method": "bdev_wait_for_examine"
00:18:42.194          }
00:18:42.194        ]
00:18:42.194      }
00:18:42.194    ]
00:18:42.194  }
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:42.194   16:29:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:42.453  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:42.453  fio-3.35
00:18:42.453  Starting 1 thread
00:18:49.024  
00:18:49.024  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74421: Mon Dec  9 16:29:17 2024
00:18:49.024    read: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(441MiB/5002msec)
00:18:49.024      slat (usec): min=2, max=182, avg= 8.26, stdev= 3.83
00:18:49.024      clat (usec): min=930, max=3649, avg=2498.77, stdev=370.72
00:18:49.024       lat (usec): min=933, max=3657, avg=2507.02, stdev=372.20
00:18:49.025      clat percentiles (usec):
00:18:49.025       |  1.00th=[ 1156],  5.00th=[ 1500], 10.00th=[ 2212], 20.00th=[ 2343],
00:18:49.025       | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638],
00:18:49.025       | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900],
00:18:49.025       | 99.00th=[ 2999], 99.50th=[ 2999], 99.90th=[ 3130], 99.95th=[ 3163],
00:18:49.025       | 99.99th=[ 3556]
00:18:49.025     bw (  KiB/s): min=86016, max=108745, per=100.00%, avg=90415.10, stdev=7723.21, samples=10
00:18:49.025     iops        : min=21504, max=27186, avg=22603.90, stdev=1930.68, samples=10
00:18:49.025    lat (usec)   : 1000=0.06%
00:18:49.025    lat (msec)   : 2=7.82%, 4=92.12%
00:18:49.025    cpu          : usr=45.51%, sys=50.61%, ctx=10, majf=0, minf=762
00:18:49.025    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:18:49.025       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:49.025       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0%
00:18:49.025       issued rwts: total=112992,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:49.025       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:49.025  
00:18:49.025  Run status group 0 (all jobs):
00:18:49.025     READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=441MiB (463MB), run=5002-5002msec
00:18:49.594  -----------------------------------------------------
00:18:49.594  Suppressions used:
00:18:49.594    count      bytes template
00:18:49.594        1         11 /usr/src/fio/parse.c
00:18:49.594        1          8 libtcmalloc_minimal.so
00:18:49.594        1        904 libcrypto.so
00:18:49.594  -----------------------------------------------------
00:18:49.594  
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:49.594    16:29:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:49.594    16:29:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:49.594    16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:49.594    16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:49.594    16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:49.594    16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:49.594  {
00:18:49.594    "subsystems": [
00:18:49.594      {
00:18:49.594        "subsystem": "bdev",
00:18:49.594        "config": [
00:18:49.594          {
00:18:49.594            "params": {
00:18:49.594              "io_mechanism": "io_uring_cmd",
00:18:49.594              "conserve_cpu": true,
00:18:49.594              "filename": "/dev/ng0n1",
00:18:49.594              "name": "xnvme_bdev"
00:18:49.594            },
00:18:49.594            "method": "bdev_xnvme_create"
00:18:49.594          },
00:18:49.594          {
00:18:49.594            "method": "bdev_wait_for_examine"
00:18:49.594          }
00:18:49.594        ]
00:18:49.594      }
00:18:49.594    ]
00:18:49.594  }
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:49.594   16:29:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:49.854  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:49.854  fio-3.35
00:18:49.854  Starting 1 thread
00:18:56.427  
00:18:56.427  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74513: Mon Dec  9 16:29:24 2024
00:18:56.427    write: IOPS=22.9k, BW=89.6MiB/s (94.0MB/s)(448MiB/5003msec); 0 zone resets
00:18:56.427      slat (usec): min=2, max=506, avg= 8.64, stdev= 5.95
00:18:56.427      clat (usec): min=69, max=8015, avg=2457.93, stdev=628.31
00:18:56.427       lat (usec): min=72, max=8024, avg=2466.57, stdev=629.13
00:18:56.427      clat percentiles (usec):
00:18:56.427       |  1.00th=[  449],  5.00th=[ 1369], 10.00th=[ 1876], 20.00th=[ 2180],
00:18:56.427       | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2507], 60.00th=[ 2573],
00:18:56.427       | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 2900], 95.00th=[ 2999],
00:18:56.427       | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 6652], 99.95th=[ 6915],
00:18:56.427       | 99.99th=[ 7373]
00:18:56.427     bw (  KiB/s): min=84136, max=110136, per=100.00%, avg=91818.00, stdev=8235.98, samples=10
00:18:56.427     iops        : min=21034, max=27534, avg=22954.50, stdev=2058.73, samples=10
00:18:56.427    lat (usec)   : 100=0.02%, 250=0.37%, 500=0.73%, 750=0.39%, 1000=1.12%
00:18:56.427    lat (msec)   : 2=10.24%, 4=85.30%, 10=1.82%
00:18:56.427    cpu          : usr=43.34%, sys=50.82%, ctx=31, majf=0, minf=763
00:18:56.427    IO depths    : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.7%, 16=23.8%, 32=52.4%, >=64=2.0%
00:18:56.427       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:56.427       complete  : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0%
00:18:56.427       issued rwts: total=0,114809,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:56.427       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:56.427  
00:18:56.427  Run status group 0 (all jobs):
00:18:56.427    WRITE: bw=89.6MiB/s (94.0MB/s), 89.6MiB/s-89.6MiB/s (94.0MB/s-94.0MB/s), io=448MiB (470MB), run=5003-5003msec
00:18:56.686  -----------------------------------------------------
00:18:56.686  Suppressions used:
00:18:56.686    count      bytes template
00:18:56.686        1         11 /usr/src/fio/parse.c
00:18:56.686        1          8 libtcmalloc_minimal.so
00:18:56.686        1        904 libcrypto.so
00:18:56.686  -----------------------------------------------------
00:18:56.686  
00:18:56.686  ************************************
00:18:56.686  END TEST xnvme_fio_plugin
00:18:56.686  ************************************
00:18:56.686  
00:18:56.686  real	0m14.629s
00:18:56.686  user	0m8.193s
00:18:56.686  sys	0m5.702s
00:18:56.686   16:29:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:56.686   16:29:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:56.945   16:29:25 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73987
00:18:56.945   16:29:25 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73987 ']'
00:18:56.945   16:29:25 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73987
00:18:56.945  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73987) - No such process
00:18:56.945   16:29:25 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73987 is not found'
00:18:56.945  Process with pid 73987 is not found
00:18:56.945  ************************************
00:18:56.945  END TEST nvme_xnvme
00:18:56.945  ************************************
00:18:56.945  
00:18:56.945  real	3m48.264s
00:18:56.945  user	2m3.629s
00:18:56.945  sys	1m27.658s
00:18:56.945   16:29:25 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:56.945   16:29:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:56.945   16:29:25  -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:18:56.945   16:29:25  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:56.945   16:29:25  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:56.945   16:29:25  -- common/autotest_common.sh@10 -- # set +x
00:18:56.945  ************************************
00:18:56.945  START TEST blockdev_xnvme
00:18:56.945  ************************************
00:18:56.945   16:29:25 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:18:56.945  * Looking for test storage...
00:18:57.205  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:18:57.205    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:57.205     16:29:26 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version
00:18:57.205     16:29:26 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:57.205    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@345 -- # : 1
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:57.205     16:29:26 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:57.205    16:29:26 blockdev_xnvme -- scripts/common.sh@368 -- # return 0
00:18:57.205    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:57.205    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:57.205  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.205  		--rc genhtml_branch_coverage=1
00:18:57.205  		--rc genhtml_function_coverage=1
00:18:57.205  		--rc genhtml_legend=1
00:18:57.205  		--rc geninfo_all_blocks=1
00:18:57.205  		--rc geninfo_unexecuted_blocks=1
00:18:57.205  		
00:18:57.205  		'
00:18:57.205    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:57.205  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.205  		--rc genhtml_branch_coverage=1
00:18:57.205  		--rc genhtml_function_coverage=1
00:18:57.205  		--rc genhtml_legend=1
00:18:57.205  		--rc geninfo_all_blocks=1
00:18:57.205  		--rc geninfo_unexecuted_blocks=1
00:18:57.205  		
00:18:57.205  		'
00:18:57.205    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:57.206  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.206  		--rc genhtml_branch_coverage=1
00:18:57.206  		--rc genhtml_function_coverage=1
00:18:57.206  		--rc genhtml_legend=1
00:18:57.206  		--rc geninfo_all_blocks=1
00:18:57.206  		--rc geninfo_unexecuted_blocks=1
00:18:57.206  		
00:18:57.206  		'
00:18:57.206    16:29:26 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:57.206  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:57.206  		--rc genhtml_branch_coverage=1
00:18:57.206  		--rc genhtml_function_coverage=1
00:18:57.206  		--rc genhtml_legend=1
00:18:57.206  		--rc geninfo_all_blocks=1
00:18:57.206  		--rc geninfo_unexecuted_blocks=1
00:18:57.206  		
00:18:57.206  		'
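[editor's note] The version probe traced above (cmp_versions via the lt helper) decides whether the installed lcov predates 2.x; since 1.15 < 2, the extra branch/function coverage flags exported in the LCOV_OPTS/LCOV blocks are enabled. The comparison logic, condensed (an illustrative reconstruction, not the verbatim scripts/common.sh body):

    # Sketch: numeric, field-wise comparison of dot-separated versions;
    # returns 0 when $1 is strictly older than $2.
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: enable legacy coverage flags"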
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:18:57.206    16:29:26 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@20 -- # :
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:18:57.206    16:29:26 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek=
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]]
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]]
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74653
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:18:57.206   16:29:26 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74653
00:18:57.206   16:29:26 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74653 ']'
00:18:57.206   16:29:26 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:57.206   16:29:26 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:57.206   16:29:26 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:57.206  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:57.206   16:29:26 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:57.206   16:29:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:57.206  [2024-12-09 16:29:26.356191] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:18:57.206  [2024-12-09 16:29:26.356330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74653 ]
00:18:57.465  [2024-12-09 16:29:26.539511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.725  [2024-12-09 16:29:26.643157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:58.663   16:29:27 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:58.663   16:29:27 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0
00:18:58.663   16:29:27 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:18:58.663   16:29:27 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf
00:18:58.663   16:29:27 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring
00:18:58.663   16:29:27 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes
00:18:58.663   16:29:27 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:18:59.232  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:59.801  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:18:59.801  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:18:59.801  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:18:59.801  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:18:59.801   16:29:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
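
The trace block above is the zoned-namespace filter: the harness walks /sys/class/nvme/nvme*, records each controller's PCI address (0000:00:10.0 through 0000:00:13.0), and calls is_block_zoned on every namespace node. Each check reads the kernel's queue/zoned attribute, and since every device here reports none, zoned_ctrls stays empty and all six namespaces proceed as conventional block devices. A minimal standalone sketch of that probe (the sysfs attribute is standard Linux; the helper body is an approximation of autotest_common.sh, not a verbatim copy):

    # Sketch: detect zoned block devices the way is_block_zoned does above.
    is_block_zoned() {
        local device=$1
        # Devices without the attribute cannot be zoned.
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        # "none" = conventional; "host-aware"/"host-managed" = zoned.
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    for ns in /sys/class/nvme/nvme*/nvme*n*; do
        is_block_zoned "${ns##*/}" && echo "zoned: ${ns##*/}"
    done
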
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]]
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:59.801   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 ))
00:19:00.061   16:29:28 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd
00:19:00.061   16:29:28 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.061   16:29:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.062    16:29:28 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c'
00:19:00.062  nvme0n1
00:19:00.062  nvme0n2
00:19:00.062  nvme0n3
00:19:00.062  nvme1n1
00:19:00.062  nvme2n1
00:19:00.062  nvme3n1
00:19:00.062   16:29:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
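
With nothing filtered out, blockdev.sh@94-100 turns each of the six namespaces into one bdev_xnvme_create line and pipes the whole batch into a single rpc_cmd call; the positional arguments are the device path, the bdev name, and the io_uring I/O mechanism, plus the -c flag the test appends to every device. The same calls work one at a time against a running target, roughly:

    # Sketch: create one xNVMe bdev by hand, mirroring the batched RPCs above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
    # Let examine callbacks settle, then list the unclaimed bdevs,
    # matching the bdev_wait_for_examine / bdev_get_bdevs steps that follow.
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
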
00:19:00.062   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:19:00.062   16:29:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.062   16:29:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.062   16:29:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.062   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.062   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:00.062    16:29:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.062   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "nvme0n1",' '  "aliases": [' '    "195b0546-450e-4d2b-803b-861795c57bc2"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "195b0546-450e-4d2b-803b-861795c57bc2",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n2",' '  "aliases": [' '    "c6d42bb8-58a8-45b2-9aa6-40169629dd88"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "c6d42bb8-58a8-45b2-9aa6-40169629dd88",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n3",' '  "aliases": [' '    "a78f0316-1be7-4855-b556-13f5cc814650"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "a78f0316-1be7-4855-b556-13f5cc814650",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme1n1",' '  "aliases": [' '    "568a2fa3-2c26-44e9-b406-3df04942711f"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "568a2fa3-2c26-44e9-b406-3df04942711f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme2n1",' '  "aliases": [' '    "1b7da8b2-b62a-4ff8-af46-19afc26251df"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "1b7da8b2-b62a-4ff8-af46-19afc26251df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme3n1",' '  "aliases": [' '    "4b036fea-14e1-4fa8-be03-efc1f834852f"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "4b036fea-14e1-4fa8-be03-efc1f834852f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}'
00:19:00.062    16:29:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name
00:19:00.322   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:19:00.322   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1
00:19:00.322   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:19:00.322   16:29:29 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74653
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74653 ']'
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74653
00:19:00.322    16:29:29 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:00.322    16:29:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74653
00:19:00.322  killing process with pid 74653
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74653'
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74653
00:19:00.322   16:29:29 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74653
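
killprocess is the harness's guarded teardown: it rejects an empty pid, confirms the process is still alive with kill -0, resolves its comm name (reactor_0 here) so it never signals a sudo wrapper by mistake, then kills and waits so pid 74653 is fully reaped before the hello-world stage starts. Roughly, under the assumption that the untaken branches look like the trace suggests:

    # Sketch of the killprocess guard sequence traced above (approximate).
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1               # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
        local process_name=unknown
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1 # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works because the target is a child of this shell
    }
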
00:19:02.860   16:29:31 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:02.860   16:29:31 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:19:02.860   16:29:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:19:02.860   16:29:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:02.860   16:29:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:02.860  ************************************
00:19:02.860  START TEST bdev_hello_world
00:19:02.860  ************************************
00:19:02.860   16:29:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:19:02.860  [2024-12-09 16:29:31.689079] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:02.860  [2024-12-09 16:29:31.689215] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74948 ]
00:19:02.860  [2024-12-09 16:29:31.873702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:02.860  [2024-12-09 16:29:31.977029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:03.429  [2024-12-09 16:29:32.411481] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:19:03.429  [2024-12-09 16:29:32.411708] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1
00:19:03.429  [2024-12-09 16:29:32.411737] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:19:03.429  [2024-12-09 16:29:32.413918] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:19:03.429  [2024-12-09 16:29:32.414279] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:19:03.429  [2024-12-09 16:29:32.414302] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:19:03.429  [2024-12-09 16:29:32.414664] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:19:03.429  
00:19:03.429  [2024-12-09 16:29:32.414694] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:19:04.366  
00:19:04.366  ************************************
00:19:04.366  END TEST bdev_hello_world
00:19:04.366  ************************************
00:19:04.366  real	0m1.893s
00:19:04.366  user	0m1.528s
00:19:04.366  sys	0m0.248s
00:19:04.366   16:29:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:04.366   16:29:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
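
That whole stage is the stock hello_bdev example run against the bdev config saved a moment earlier: it opens nvme0n1, acquires an I/O channel, writes a buffer, reads it back, and exits once "Hello World!" round-trips, 1.9 s wall time in this run. Replaying it outside the harness needs nothing but the binary and the JSON:

    # Sketch: rerun the hello-world stage directly (paths from this run).
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/hello_bdev \
        --json $SPDK/test/bdev/bdev.json \
        -b nvme0n1
    # Success looks like the log above:
    #   Read string from bdev : Hello World!
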
00:19:04.625   16:29:33 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:19:04.625   16:29:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:04.625   16:29:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:04.625   16:29:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:04.625  ************************************
00:19:04.625  START TEST bdev_bounds
00:19:04.625  ************************************
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74985
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:19:04.625  Process bdevio pid: 74985
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74985'
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74985
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74985 ']'
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:04.625  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:04.625   16:29:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:19:04.625  [2024-12-09 16:29:33.664715] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:04.625  [2024-12-09 16:29:33.665285] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74985 ]
00:19:04.885  [2024-12-09 16:29:33.848028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:04.885  [2024-12-09 16:29:33.957151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:04.885  [2024-12-09 16:29:33.957297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:04.885  [2024-12-09 16:29:33.957340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:05.453   16:29:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:05.453   16:29:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:19:05.453   16:29:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:19:05.453  I/O targets:
00:19:05.453    nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:05.453    nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:05.453    nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:05.453    nvme1n1: 262144 blocks of 4096 bytes (1024 MiB)
00:19:05.453    nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:19:05.453    nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:19:05.453  
00:19:05.453  
00:19:05.453       CUnit - A unit testing framework for C - Version 2.1-3
00:19:05.453       http://cunit.sourceforge.net/
00:19:05.453  
00:19:05.453  
00:19:05.453  Suite: bdevio tests on: nvme3n1
00:19:05.453    Test: blockdev write read block ...passed
00:19:05.453    Test: blockdev write zeroes read block ...passed
00:19:05.453    Test: blockdev write zeroes read no split ...passed
00:19:05.453    Test: blockdev write zeroes read split ...passed
00:19:05.712    Test: blockdev write zeroes read split partial ...passed
00:19:05.712    Test: blockdev reset ...passed
00:19:05.712    Test: blockdev write read 8 blocks ...passed
00:19:05.712    Test: blockdev write read size > 128k ...passed
00:19:05.712    Test: blockdev write read invalid size ...passed
00:19:05.712    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:05.712    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:05.712    Test: blockdev write read max offset ...passed
00:19:05.712    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:05.712    Test: blockdev writev readv 8 blocks ...passed
00:19:05.712    Test: blockdev writev readv 30 x 1block ...passed
00:19:05.712    Test: blockdev writev readv block ...passed
00:19:05.712    Test: blockdev writev readv size > 128k ...passed
00:19:05.712    Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:05.712    Test: blockdev comparev and writev ...passed
00:19:05.712    Test: blockdev nvme passthru rw ...passed
00:19:05.712    Test: blockdev nvme passthru vendor specific ...passed
00:19:05.712    Test: blockdev nvme admin passthru ...passed
00:19:05.712    Test: blockdev copy ...passed
00:19:05.712  Suite: bdevio tests on: nvme2n1
00:19:05.712    Test: blockdev write read block ...passed
00:19:05.712    Test: blockdev write zeroes read block ...passed
00:19:05.712    Test: blockdev write zeroes read no split ...passed
00:19:05.712    Test: blockdev write zeroes read split ...passed
00:19:05.712    Test: blockdev write zeroes read split partial ...passed
00:19:05.712    Test: blockdev reset ...passed
00:19:05.712    Test: blockdev write read 8 blocks ...passed
00:19:05.712    Test: blockdev write read size > 128k ...passed
00:19:05.712    Test: blockdev write read invalid size ...passed
00:19:05.712    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:05.712    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:05.712    Test: blockdev write read max offset ...passed
00:19:05.712    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:05.712    Test: blockdev writev readv 8 blocks ...passed
00:19:05.712    Test: blockdev writev readv 30 x 1block ...passed
00:19:05.712    Test: blockdev writev readv block ...passed
00:19:05.712    Test: blockdev writev readv size > 128k ...passed
00:19:05.712    Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:05.712    Test: blockdev comparev and writev ...passed
00:19:05.712    Test: blockdev nvme passthru rw ...passed
00:19:05.712    Test: blockdev nvme passthru vendor specific ...passed
00:19:05.712    Test: blockdev nvme admin passthru ...passed
00:19:05.712    Test: blockdev copy ...passed
00:19:05.712  Suite: bdevio tests on: nvme1n1
00:19:05.712    Test: blockdev write read block ...passed
00:19:05.712    Test: blockdev write zeroes read block ...passed
00:19:05.712    Test: blockdev write zeroes read no split ...passed
00:19:05.712    Test: blockdev write zeroes read split ...passed
00:19:05.712    Test: blockdev write zeroes read split partial ...passed
00:19:05.712    Test: blockdev reset ...passed
00:19:05.712    Test: blockdev write read 8 blocks ...passed
00:19:05.713    Test: blockdev write read size > 128k ...passed
00:19:05.713    Test: blockdev write read invalid size ...passed
00:19:05.713    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:05.713    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:05.713    Test: blockdev write read max offset ...passed
00:19:05.713    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:05.713    Test: blockdev writev readv 8 blocks ...passed
00:19:05.713    Test: blockdev writev readv 30 x 1block ...passed
00:19:05.713    Test: blockdev writev readv block ...passed
00:19:05.713    Test: blockdev writev readv size > 128k ...passed
00:19:05.713    Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:05.713    Test: blockdev comparev and writev ...passed
00:19:05.713    Test: blockdev nvme passthru rw ...passed
00:19:05.713    Test: blockdev nvme passthru vendor specific ...passed
00:19:05.713    Test: blockdev nvme admin passthru ...passed
00:19:05.713    Test: blockdev copy ...passed
00:19:05.713  Suite: bdevio tests on: nvme0n3
00:19:05.713    Test: blockdev write read block ...passed
00:19:05.713    Test: blockdev write zeroes read block ...passed
00:19:05.713    Test: blockdev write zeroes read no split ...passed
00:19:05.713    Test: blockdev write zeroes read split ...passed
00:19:05.972    Test: blockdev write zeroes read split partial ...passed
00:19:05.972    Test: blockdev reset ...passed
00:19:05.972    Test: blockdev write read 8 blocks ...passed
00:19:05.972    Test: blockdev write read size > 128k ...passed
00:19:05.972    Test: blockdev write read invalid size ...passed
00:19:05.972    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:05.972    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:05.972    Test: blockdev write read max offset ...passed
00:19:05.972    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:05.972    Test: blockdev writev readv 8 blocks ...passed
00:19:05.972    Test: blockdev writev readv 30 x 1block ...passed
00:19:05.972    Test: blockdev writev readv block ...passed
00:19:05.972    Test: blockdev writev readv size > 128k ...passed
00:19:05.972    Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:05.972    Test: blockdev comparev and writev ...passed
00:19:05.972    Test: blockdev nvme passthru rw ...passed
00:19:05.972    Test: blockdev nvme passthru vendor specific ...passed
00:19:05.972    Test: blockdev nvme admin passthru ...passed
00:19:05.972    Test: blockdev copy ...passed
00:19:05.972  Suite: bdevio tests on: nvme0n2
00:19:05.972    Test: blockdev write read block ...passed
00:19:05.972    Test: blockdev write zeroes read block ...passed
00:19:05.972    Test: blockdev write zeroes read no split ...passed
00:19:05.972    Test: blockdev write zeroes read split ...passed
00:19:05.972    Test: blockdev write zeroes read split partial ...passed
00:19:05.972    Test: blockdev reset ...passed
00:19:05.972    Test: blockdev write read 8 blocks ...passed
00:19:05.972    Test: blockdev write read size > 128k ...passed
00:19:05.972    Test: blockdev write read invalid size ...passed
00:19:05.972    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:05.972    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:05.972    Test: blockdev write read max offset ...passed
00:19:05.972    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:05.972    Test: blockdev writev readv 8 blocks ...passed
00:19:05.972    Test: blockdev writev readv 30 x 1block ...passed
00:19:05.972    Test: blockdev writev readv block ...passed
00:19:05.972    Test: blockdev writev readv size > 128k ...passed
00:19:05.972    Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:05.972    Test: blockdev comparev and writev ...passed
00:19:05.972    Test: blockdev nvme passthru rw ...passed
00:19:05.972    Test: blockdev nvme passthru vendor specific ...passed
00:19:05.972    Test: blockdev nvme admin passthru ...passed
00:19:05.972    Test: blockdev copy ...passed
00:19:05.972  Suite: bdevio tests on: nvme0n1
00:19:05.972    Test: blockdev write read block ...passed
00:19:05.972    Test: blockdev write zeroes read block ...passed
00:19:05.972    Test: blockdev write zeroes read no split ...passed
00:19:05.972    Test: blockdev write zeroes read split ...passed
00:19:05.972    Test: blockdev write zeroes read split partial ...passed
00:19:05.972    Test: blockdev reset ...passed
00:19:05.972    Test: blockdev write read 8 blocks ...passed
00:19:05.972    Test: blockdev write read size > 128k ...passed
00:19:05.972    Test: blockdev write read invalid size ...passed
00:19:05.972    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:05.972    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:05.972    Test: blockdev write read max offset ...passed
00:19:05.972    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:05.972    Test: blockdev writev readv 8 blocks ...passed
00:19:05.972    Test: blockdev writev readv 30 x 1block ...passed
00:19:05.972    Test: blockdev writev readv block ...passed
00:19:05.972    Test: blockdev writev readv size > 128k ...passed
00:19:05.972    Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:05.972    Test: blockdev comparev and writev ...passed
00:19:05.972    Test: blockdev nvme passthru rw ...passed
00:19:05.972    Test: blockdev nvme passthru vendor specific ...passed
00:19:05.972    Test: blockdev nvme admin passthru ...passed
00:19:05.972    Test: blockdev copy ...passed
00:19:05.972  
00:19:05.972  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:05.972                suites      6      6    n/a      0        0
00:19:05.972                 tests    138    138    138      0        0
00:19:05.972               asserts    780    780    780      0      n/a
00:19:05.972  
00:19:05.972  Elapsed time =    1.343 seconds
00:19:05.972  0
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74985
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74985 ']'
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74985
00:19:05.972    16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:05.972    16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74985
00:19:05.972  killing process with pid 74985
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74985'
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74985
00:19:05.972   16:29:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74985
00:19:07.350   16:29:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:19:07.350  
00:19:07.350  real	0m2.648s
00:19:07.350  user	0m6.534s
00:19:07.350  sys	0m0.399s
00:19:07.350   16:29:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:07.350   16:29:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:19:07.350  ************************************
00:19:07.350  END TEST bdev_bounds
00:19:07.350  ************************************
00:19:07.350   16:29:36 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:19:07.350   16:29:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:19:07.350   16:29:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:07.350   16:29:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:07.350  ************************************
00:19:07.350  START TEST bdev_nbd
00:19:07.350  ************************************
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:19:07.350    16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:19:07.350   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75046
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75046 /var/tmp/spdk-nbd.sock
00:19:07.351  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 75046 ']'
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:07.351   16:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:19:07.351  [2024-12-09 16:29:36.408077] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:07.351  [2024-12-09 16:29:36.408326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:07.611  [2024-12-09 16:29:36.592436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:07.611  [2024-12-09 16:29:36.697535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
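
For the NBD stage the harness boots a dedicated bdev_svc with its own RPC socket (-r /var/tmp/spdk-nbd.sock -i 0) so the nbd_* calls cannot collide with anything on the default /var/tmp/spdk.sock, and the waitforlisten above blocks until that socket answers. Every rpc.py invocation from here on therefore carries -s /var/tmp/spdk-nbd.sock:

    # Sketch: talk to the NBD-test target on its private socket.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC nbd_start_disk nvme0n1   # prints the /dev/nbdN the kernel assigned
    $RPC nbd_get_disks | jq .     # current nbd_device <-> bdev_name mapping
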
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1'
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1'
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:19:08.180   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:08.180    16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:19:08.439    16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:19:08.439   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:08.440  1+0 records in
00:19:08.440  1+0 records out
00:19:08.440  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629995 s, 6.5 MB/s
00:19:08.440    16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
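
Each nbd_start_disk is followed by waitfornbd, which polls /proc/partitions up to 20 times for the new name and then proves the device actually serves I/O by reading a single 4 KiB block with O_DIRECT dd and checking the copy is non-empty; the few-MB/s figures in these dd lines are just that one-block transfer. Reconstructed from the trace (the sleep between polls is an assumption, it does not show in xtrace, and the scratch path here differs from the harness's):

    # Sketch of waitfornbd as traced above; poll interval is assumed.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One O_DIRECT block read proves the device answers I/O.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }
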
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:08.440   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:08.440    16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:19:08.699    16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:08.699  1+0 records in
00:19:08.699  1+0 records out
00:19:08.699  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705261 s, 5.8 MB/s
00:19:08.699    16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:08.699   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:08.699    16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:19:08.959    16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:08.959   16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:08.959  1+0 records in
00:19:08.959  1+0 records out
00:19:08.959  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783466 s, 5.2 MB/s
00:19:08.959    16:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:08.959   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:08.959   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:08.959   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:08.959   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:08.959   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:08.959   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:08.959    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:19:09.218    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:09.218  1+0 records in
00:19:09.218  1+0 records out
00:19:09.218  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720248 s, 5.7 MB/s
00:19:09.218    16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:09.218   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:09.219   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:09.219   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:09.219   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:09.219   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:09.219   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:09.219    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:19:09.478    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:09.478  1+0 records in
00:19:09.478  1+0 records out
00:19:09.478  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0035704 s, 1.1 MB/s
00:19:09.478    16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:09.478   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:09.478    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:19:09.738    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:09.738  1+0 records in
00:19:09.738  1+0 records out
00:19:09.738  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012717 s, 3.2 MB/s
00:19:09.738    16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:19:09.738   16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:19:09.738    16:29:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd0",
00:19:09.997      "bdev_name": "nvme0n1"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd1",
00:19:09.997      "bdev_name": "nvme0n2"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd2",
00:19:09.997      "bdev_name": "nvme0n3"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd3",
00:19:09.997      "bdev_name": "nvme1n1"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd4",
00:19:09.997      "bdev_name": "nvme2n1"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd5",
00:19:09.997      "bdev_name": "nvme3n1"
00:19:09.997    }
00:19:09.997  ]'
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:19:09.997    16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:19:09.997    16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd0",
00:19:09.997      "bdev_name": "nvme0n1"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd1",
00:19:09.997      "bdev_name": "nvme0n2"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd2",
00:19:09.997      "bdev_name": "nvme0n3"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd3",
00:19:09.997      "bdev_name": "nvme1n1"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd4",
00:19:09.997      "bdev_name": "nvme2n1"
00:19:09.997    },
00:19:09.997    {
00:19:09.997      "nbd_device": "/dev/nbd5",
00:19:09.997      "bdev_name": "nvme3n1"
00:19:09.997    }
00:19:09.997  ]'
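
nbd_get_disks hands back the device-to-bdev mapping as JSON; the harness stores it once in nbd_disks_json and re-parses it with jq so the stop loop below tears down exactly the six devices this process started, nbd0 through nbd5. The same extraction by hand:

    # Sketch: pull the live nbd device list straight from the target.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    mapfile -t nbd_disks < <($RPC nbd_get_disks | jq -r '.[] | .nbd_device')
    printf 'stopping: %s\n' "${nbd_disks[@]}"   # /dev/nbd0 .. /dev/nbd5 here
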
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:09.997   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:19:10.257    16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
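
Teardown mirrors startup: nbd_stop_disk asks the target to disconnect, then waitfornbd_exit polls /proc/partitions until the entry is gone before touching the next device, so a half-disconnected /dev/nbdX cannot leak into a later test. Same reconstruction caveat as waitfornbd, the poll interval is assumed:

    # Sketch of waitfornbd_exit as traced above: loop until the name
    # disappears from /proc/partitions (interval assumed).
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }
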
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:10.257   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:19:10.516    16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:10.516   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:19:10.776    16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:10.776   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:19:11.035    16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:11.035   16:29:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:19:11.035    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:19:11.294    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:11.294   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:11.294    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:19:11.294    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:11.294     16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:11.555    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:19:11.555     16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:19:11.555     16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:19:11.555    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:19:11.555     16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:19:11.555     16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:19:11.555     16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:19:11.555    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:19:11.555    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:11.555   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:19:11.815  /dev/nbd0
00:19:11.815    16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:11.815  1+0 records in
00:19:11.815  1+0 records out
00:19:11.815  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725156 s, 5.6 MB/s
00:19:11.815    16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:11.815   16:29:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1
00:19:12.075  /dev/nbd1
00:19:12.075    16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:12.075  1+0 records in
00:19:12.075  1+0 records out
00:19:12.075  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730617 s, 5.6 MB/s
00:19:12.075    16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:12.075   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10
00:19:12.334  /dev/nbd10
00:19:12.334    16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:12.334   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:12.335  1+0 records in
00:19:12.335  1+0 records out
00:19:12.335  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715124 s, 5.7 MB/s
00:19:12.335    16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:12.335   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11
00:19:12.594  /dev/nbd11
00:19:12.594    16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:12.594  1+0 records in
00:19:12.594  1+0 records out
00:19:12.594  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631533 s, 6.5 MB/s
00:19:12.594    16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:12.594   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12
00:19:12.854  /dev/nbd12
00:19:12.854    16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:12.854  1+0 records in
00:19:12.854  1+0 records out
00:19:12.854  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009035 s, 4.5 MB/s
00:19:12.854    16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:12.854   16:29:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:19:13.113  /dev/nbd13
00:19:13.113    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:13.113  1+0 records in
00:19:13.113  1+0 records out
00:19:13.113  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820883 s, 5.0 MB/s
00:19:13.113    16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:13.113   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:19:13.113    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:19:13.113    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:13.114     16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:13.373    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd0",
00:19:13.373      "bdev_name": "nvme0n1"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd1",
00:19:13.373      "bdev_name": "nvme0n2"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd10",
00:19:13.373      "bdev_name": "nvme0n3"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd11",
00:19:13.373      "bdev_name": "nvme1n1"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd12",
00:19:13.373      "bdev_name": "nvme2n1"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd13",
00:19:13.373      "bdev_name": "nvme3n1"
00:19:13.373    }
00:19:13.373  ]'
00:19:13.373     16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd0",
00:19:13.373      "bdev_name": "nvme0n1"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd1",
00:19:13.373      "bdev_name": "nvme0n2"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd10",
00:19:13.373      "bdev_name": "nvme0n3"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd11",
00:19:13.373      "bdev_name": "nvme1n1"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd12",
00:19:13.373      "bdev_name": "nvme2n1"
00:19:13.373    },
00:19:13.373    {
00:19:13.373      "nbd_device": "/dev/nbd13",
00:19:13.373      "bdev_name": "nvme3n1"
00:19:13.373    }
00:19:13.373  ]'
00:19:13.373     16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:19:13.373    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:19:13.373  /dev/nbd1
00:19:13.373  /dev/nbd10
00:19:13.373  /dev/nbd11
00:19:13.373  /dev/nbd12
00:19:13.373  /dev/nbd13'
00:19:13.373     16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:19:13.373  /dev/nbd1
00:19:13.373  /dev/nbd10
00:19:13.373  /dev/nbd11
00:19:13.373  /dev/nbd12
00:19:13.373  /dev/nbd13'
00:19:13.373     16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:19:13.373    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:19:13.373    16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:19:13.373   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:19:13.373  256+0 records in
00:19:13.373  256+0 records out
00:19:13.373  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625148 s, 168 MB/s
00:19:13.374   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:13.374   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:19:13.633  256+0 records in
00:19:13.633  256+0 records out
00:19:13.633  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125615 s, 8.3 MB/s
00:19:13.633   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:13.633   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:19:13.633  256+0 records in
00:19:13.633  256+0 records out
00:19:13.633  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130403 s, 8.0 MB/s
00:19:13.633   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:13.633   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:19:13.893  256+0 records in
00:19:13.893  256+0 records out
00:19:13.893  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129019 s, 8.1 MB/s
00:19:13.893   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:13.893   16:29:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:19:13.893  256+0 records in
00:19:13.893  256+0 records out
00:19:13.893  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129509 s, 8.1 MB/s
00:19:13.893   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:13.893   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:19:14.152  256+0 records in
00:19:14.152  256+0 records out
00:19:14.152  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163884 s, 6.4 MB/s
00:19:14.152   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:19:14.152   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:19:14.411  256+0 records in
00:19:14.411  256+0 records out
00:19:14.411  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128594 s, 8.2 MB/s
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:14.411   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:19:14.412   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:14.412   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:14.412   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:14.412   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:19:14.671    16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:14.671   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:19:14.931    16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:14.931   16:29:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:19:14.931    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:14.931   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:19:15.190    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:15.190   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:19:15.450    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:15.450   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:19:15.709    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:15.709   16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:15.709    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:19:15.709    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:15.709     16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:19:15.968    16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:19:15.969     16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:19:15.969     16:29:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:19:15.969    16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:19:15.969     16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:19:15.969     16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:19:15.969     16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:19:15.969    16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:19:15.969    16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:19:15.969   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:19:16.227  malloc_lvol_verify
00:19:16.227   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:19:16.486  84e09a44-1ceb-43ef-b919-94d896fb35d4
00:19:16.486   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:19:16.486  17326cd6-ffa9-40cc-ad6e-34f9b294fbb3
00:19:16.486   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:19:16.745  /dev/nbd0
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:19:16.745  mke2fs 1.47.0 (5-Feb-2023)
00:19:16.745  Discarding device blocks:    0/4096         done                            
00:19:16.745  Creating filesystem with 4096 1k blocks and 1024 inodes
00:19:16.745  
00:19:16.745  Allocating group tables: 0/1   done                            
00:19:16.745  Writing inode tables: 0/1   done                            
00:19:16.745  Creating journal (1024 blocks): done
00:19:16.745  Writing superblocks and filesystem accounting information: 0/1   done
00:19:16.745  
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:16.745   16:29:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:19:17.005    16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75046
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 75046 ']'
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 75046
00:19:17.005    16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:17.005    16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75046
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:17.005  killing process with pid 75046
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75046'
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 75046
00:19:17.005   16:29:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 75046
00:19:18.475   16:29:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:19:18.475  
00:19:18.475  real	0m10.985s
00:19:18.475  user	0m13.930s
00:19:18.475  sys	0m4.855s
00:19:18.475   16:29:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:18.475   16:29:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:19:18.475  ************************************
00:19:18.475  END TEST bdev_nbd
00:19:18.475  ************************************
00:19:18.475   16:29:47 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:19:18.475   16:29:47 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']'
00:19:18.475   16:29:47 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']'
00:19:18.475   16:29:47 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:19:18.475   16:29:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:18.475   16:29:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:18.475   16:29:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:18.475  ************************************
00:19:18.475  START TEST bdev_fio
00:19:18.475  ************************************
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:19:18.475  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:19:18.475    16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:19:18.475    16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:19:18.475    16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:19:18.475  ************************************
00:19:18.475  START TEST bdev_fio_rw_verify
00:19:18.475  ************************************
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:18.475   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:18.475    16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:18.475    16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:19:18.476    16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:18.476   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:18.476   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:18.476   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:19:18.476   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:18.476   16:29:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:18.735  job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:18.735  job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:18.735  job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:18.735  job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:18.735  job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:18.735  job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:18.735  fio-3.35
00:19:18.735  Starting 6 threads
00:19:30.952  
00:19:30.952  job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75454: Mon Dec  9 16:29:58 2024
00:19:30.952    read: IOPS=34.2k, BW=134MiB/s (140MB/s)(1337MiB/10001msec)
00:19:30.952      slat (usec): min=2, max=888, avg= 7.27, stdev= 6.18
00:19:30.952      clat (usec): min=76, max=6667, avg=526.93, stdev=224.39
00:19:30.952       lat (usec): min=88, max=6681, avg=534.20, stdev=225.42
00:19:30.952      clat percentiles (usec):
00:19:30.952       | 50.000th=[  529], 99.000th=[ 1106], 99.900th=[ 1631], 99.990th=[ 3752],
00:19:30.952       | 99.999th=[ 6652]
00:19:30.952    write: IOPS=34.5k, BW=135MiB/s (141MB/s)(1346MiB/10001msec); 0 zone resets
00:19:30.952      slat (usec): min=10, max=2964, avg=24.89, stdev=33.69
00:19:30.952      clat (usec): min=81, max=4988, avg=628.95, stdev=239.22
00:19:30.952       lat (usec): min=95, max=5042, avg=653.83, stdev=244.17
00:19:30.952      clat percentiles (usec):
00:19:30.952       | 50.000th=[  627], 99.000th=[ 1352], 99.900th=[ 1991], 99.990th=[ 2606],
00:19:30.952       | 99.999th=[ 4293]
00:19:30.952     bw (  KiB/s): min=107567, max=176146, per=100.00%, avg=138551.26, stdev=2915.96, samples=114
00:19:30.952     iops        : min=26891, max=44036, avg=34637.47, stdev=728.98, samples=114
00:19:30.952    lat (usec)   : 100=0.01%, 250=7.55%, 500=29.62%, 750=43.84%, 1000=15.11%
00:19:30.952    lat (msec)   : 2=3.79%, 4=0.08%, 10=0.01%
00:19:30.952    cpu          : usr=56.45%, sys=28.60%, ctx=8235, majf=0, minf=28219
00:19:30.952    IO depths    : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:30.952       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:30.952       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:30.952       issued rwts: total=342177,344597,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:30.952       latency   : target=0, window=0, percentile=100.00%, depth=8
00:19:30.952  
00:19:30.952  Run status group 0 (all jobs):
00:19:30.952     READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=1337MiB (1402MB), run=10001-10001msec
00:19:30.952    WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1346MiB (1411MB), run=10001-10001msec
00:19:30.952  -----------------------------------------------------
00:19:30.952  Suppressions used:
00:19:30.952    count      bytes template
00:19:30.952        6         48 /usr/src/fio/parse.c
00:19:30.952     2190     210240 /usr/src/fio/iolog.c
00:19:30.952        1          8 libtcmalloc_minimal.so
00:19:30.952        1        904 libcrypto.so
00:19:30.952  -----------------------------------------------------
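
[editor's note] The suppression counts above are LeakSanitizer suppressions matched during the ASAN-instrumented fio run. A minimal sketch of the suppressions-file format those templates imply (file name and app are hypothetical; LSan matches each template as a substring of the leak's stack):

    cat > lsan.supp <<'EOF'
    leak:/usr/src/fio/parse.c
    leak:/usr/src/fio/iolog.c
    leak:libtcmalloc_minimal.so
    leak:libcrypto.so
    EOF
    LSAN_OPTIONS=suppressions=lsan.supp ./instrumented_app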
00:19:30.952  
00:19:30.952  
00:19:30.952  real	0m12.466s
00:19:30.952  user	0m35.849s
00:19:30.952  sys	0m17.563s
00:19:30.952   16:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:30.952  ************************************
00:19:30.952  END TEST bdev_fio_rw_verify
00:19:30.952  ************************************
00:19:30.952   16:29:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
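
[editor's note] The @1349-@1356 trace above boils down to one pattern: locate the ASAN runtime that the SPDK fio plugin links against, then preload both it and the plugin so fio can resolve the spdk_bdev ioengine in the instrumented build. A condensed sketch of that step (paths copied from the log, not a new invocation):

    asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json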
00:19:30.952   16:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:19:30.952   16:29:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:19:30.952    16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:19:30.952    16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "nvme0n1",' '  "aliases": [' '    "195b0546-450e-4d2b-803b-861795c57bc2"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "195b0546-450e-4d2b-803b-861795c57bc2",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n2",' '  "aliases": [' '    "c6d42bb8-58a8-45b2-9aa6-40169629dd88"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "c6d42bb8-58a8-45b2-9aa6-40169629dd88",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n3",' '  "aliases": [' '    "a78f0316-1be7-4855-b556-13f5cc814650"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "a78f0316-1be7-4855-b556-13f5cc814650",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme1n1",' '  "aliases": [' '    "568a2fa3-2c26-44e9-b406-3df04942711f"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "568a2fa3-2c26-44e9-b406-3df04942711f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme2n1",' '  "aliases": [' '    "1b7da8b2-b62a-4ff8-af46-19afc26251df"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "1b7da8b2-b62a-4ff8-af46-19afc26251df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme3n1",' '  "aliases": [' '    "4b036fea-14e1-4fa8-be03-efc1f834852f"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "4b036fea-14e1-4fa8-be03-efc1f834852f",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}'
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:30.952  /home/vagrant/spdk_repo/spdk
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT
00:19:30.952   16:30:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
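
[editor's note] Why no trim fio job ran here: every xNVMe bdev dumped above reports "unmap": false, so the @354 jq filter selects no names and blockdev.sh falls through to removing the generated bdev.fio (@360). The same filter shown standalone (a sketch; the bdev JSON would normally come from rpc.py bdev_get_bdevs, which returns an array, hence the leading .[]):

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'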
00:19:30.952  
00:19:30.952  real	0m12.705s
00:19:30.952  user	0m35.960s
00:19:30.953  sys	0m17.695s
00:19:30.953   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:30.953  ************************************
00:19:30.953   16:30:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:19:30.953  END TEST bdev_fio
00:19:30.953  ************************************
00:19:30.953   16:30:00 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:30.953   16:30:00 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:30.953   16:30:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:30.953   16:30:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:31.212   16:30:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:31.212  ************************************
00:19:31.212  START TEST bdev_verify
00:19:31.212  ************************************
00:19:31.212   16:30:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:31.212  [2024-12-09 16:30:00.244846] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:31.212  [2024-12-09 16:30:00.244984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75629 ]
00:19:31.471  [2024-12-09 16:30:00.432937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:31.471  [2024-12-09 16:30:00.547516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:31.471  [2024-12-09 16:30:00.547548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:32.040  Running I/O for 5 seconds...
00:19:33.986      21182.00 IOPS,    82.74 MiB/s
[2024-12-09T16:30:04.545Z]     21072.00 IOPS,    82.31 MiB/s
[2024-12-09T16:30:05.481Z]     21844.67 IOPS,    85.33 MiB/s
[2024-12-09T16:30:06.417Z]     21888.00 IOPS,    85.50 MiB/s
[2024-12-09T16:30:06.417Z]     22092.40 IOPS,    86.30 MiB/s
00:19:37.238                                                                                                  Latency(us)
00:19:37.238  
[2024-12-09T16:30:06.417Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:37.238  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x0 length 0x80000
00:19:37.238  	 nvme0n1             :       5.06    1693.95       6.62       0.00     0.00   75447.09   14107.35   72852.87
00:19:37.238  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x80000 length 0x80000
00:19:37.238  	 nvme0n1             :       5.05    1699.09       6.64       0.00     0.00   75218.58   10738.43   75800.67
00:19:37.238  Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x0 length 0x80000
00:19:37.238  	 nvme0n2             :       5.06    1693.53       6.62       0.00     0.00   75352.22   13054.56   65693.92
00:19:37.238  Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x80000 length 0x80000
00:19:37.238  	 nvme0n2             :       5.06    1693.33       6.61       0.00     0.00   75356.15   10948.99   86328.55
00:19:37.238  Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x0 length 0x80000
00:19:37.238  	 nvme0n3             :       5.07    1690.62       6.60       0.00     0.00   75353.65   15160.13   67378.38
00:19:37.238  Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x80000 length 0x80000
00:19:37.238  	 nvme0n3             :       5.08    1689.72       6.60       0.00     0.00   75395.92    8843.41   83801.86
00:19:37.238  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x0 length 0x20000
00:19:37.238  	 nvme1n1             :       5.07    1690.15       6.60       0.00     0.00   75267.23    9527.72   72852.87
00:19:37.238  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x20000 length 0x20000
00:19:37.238  	 nvme1n1             :       5.09    1686.18       6.59       0.00     0.00   75440.87    8738.13   74116.22
00:19:37.238  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x0 length 0xbd0bd
00:19:37.238  	 nvme2n1             :       5.07    2587.53      10.11       0.00     0.00   48962.12    5606.09   63588.34
00:19:37.238  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:19:37.238  	 nvme2n1             :       5.08    2556.21       9.99       0.00     0.00   49577.64    6632.56   58956.08
00:19:37.238  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0x0 length 0xa0000
00:19:37.238  	 nvme3n1             :       5.05    1672.55       6.53       0.00     0.00   75841.29   10580.51   78748.48
00:19:37.238  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.238  	 Verification LBA range: start 0xa0000 length 0xa0000
00:19:37.238  	 nvme3n1             :       5.08    1611.49       6.29       0.00     0.00   78500.88   10580.51   93066.38
00:19:37.238  
[2024-12-09T16:30:06.417Z]  ===================================================================================================================
00:19:37.238  
[2024-12-09T16:30:06.417Z]  Total                       :              21964.37      85.80       0.00     0.00   69506.17    5606.09   93066.38
00:19:38.618  
00:19:38.618  real	0m7.231s
00:19:38.618  user	0m11.087s
00:19:38.618  sys	0m1.987s
00:19:38.618   16:30:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:38.618   16:30:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:38.618  ************************************
00:19:38.618  END TEST bdev_verify
00:19:38.618  ************************************
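
[editor's note] A note on the duplicated job rows above: -m 0x3 brings up reactors on cores 0 and 1 (see the two reactor_run notices), and bdevperf then drives each bdev from both cores, which is why every device appears once per core mask. The mask itself is just one bit per core:

    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # cores 0 and 1 -> 0x3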
00:19:38.618   16:30:07 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:38.618   16:30:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:38.618   16:30:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:38.618   16:30:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:38.618  ************************************
00:19:38.618  START TEST bdev_verify_big_io
00:19:38.618  ************************************
00:19:38.618   16:30:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:38.618  [2024-12-09 16:30:07.540883] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:38.618  [2024-12-09 16:30:07.541546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75733 ]
00:19:38.618  [2024-12-09 16:30:07.728209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:38.877  [2024-12-09 16:30:07.871347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:38.877  [2024-12-09 16:30:07.871370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:39.445  Running I/O for 5 seconds...
00:19:44.113       1600.00 IOPS,   100.00 MiB/s
[2024-12-09T16:30:14.230Z]      3273.00 IOPS,   204.56 MiB/s
[2024-12-09T16:30:14.799Z]      3867.33 IOPS,   241.71 MiB/s
00:19:45.620                                                                                                  Latency(us)
00:19:45.620  
[2024-12-09T16:30:14.799Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:45.620  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x0 length 0x8000
00:19:45.620  	 nvme0n1             :       5.43     212.18      13.26       0.00     0.00  585636.54    4579.62  512075.67
00:19:45.620  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x8000 length 0x8000
00:19:45.620  	 nvme0n1             :       5.49     139.85       8.74       0.00     0.00  878375.17    4711.22 1381256.74
00:19:45.620  Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x0 length 0x8000
00:19:45.620  	 nvme0n2             :       5.44     217.71      13.61       0.00     0.00  573331.70   11159.54  559240.53
00:19:45.620  Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x8000 length 0x8000
00:19:45.620  	 nvme0n2             :       5.44      82.31       5.14       0.00     0.00 1430811.99  107805.40 2614281.05
00:19:45.620  Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x0 length 0x8000
00:19:45.620  	 nvme0n3             :       5.43     223.86      13.99       0.00     0.00  542735.74    4948.10  889394.58
00:19:45.620  Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x8000 length 0x8000
00:19:45.620  	 nvme0n3             :       5.70     171.37      10.71       0.00     0.00  663703.34   60640.54  764744.58
00:19:45.620  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x0 length 0x2000
00:19:45.620  	 nvme1n1             :       5.45     235.02      14.69       0.00     0.00  514549.92   10791.07 1003937.82
00:19:45.620  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x2000 length 0x2000
00:19:45.620  	 nvme1n1             :       5.77     160.71      10.04       0.00     0.00  680309.64   19266.00 1664245.92
00:19:45.620  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x0 length 0xbd0b
00:19:45.620  	 nvme2n1             :       5.44     348.82      21.80       0.00     0.00  340872.39    6553.60  431221.62
00:19:45.620  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:19:45.620  	 nvme2n1             :       5.97     198.62      12.41       0.00     0.00  535485.75    8317.02 1994399.97
00:19:45.620  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0x0 length 0xa000
00:19:45.620  	 nvme3n1             :       5.45     226.12      14.13       0.00     0.00  516259.77   10475.23  380687.83
00:19:45.620  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:45.620  	 Verification LBA range: start 0xa000 length 0xa000
00:19:45.620  	 nvme3n1             :       6.08     259.30      16.21       0.00     0.00  397403.47     361.90 2115681.05
00:19:45.620  
[2024-12-09T16:30:14.799Z]  ===================================================================================================================
00:19:45.620  
[2024-12-09T16:30:14.799Z]  Total                       :               2475.86     154.74       0.00     0.00  564229.76     361.90 2614281.05
00:19:46.999  
00:19:46.999  real	0m8.716s
00:19:46.999  user	0m15.730s
00:19:46.999  sys	0m0.693s
00:19:46.999   16:30:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:46.999   16:30:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:46.999  ************************************
00:19:46.999  END TEST bdev_verify_big_io
00:19:46.999  ************************************
00:19:47.258   16:30:16 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:47.258   16:30:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:47.258   16:30:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:47.258   16:30:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:47.258  ************************************
00:19:47.258  START TEST bdev_write_zeroes
00:19:47.258  ************************************
00:19:47.258   16:30:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:47.258  [2024-12-09 16:30:16.340712] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:47.258  [2024-12-09 16:30:16.340842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75849 ]
00:19:47.517  [2024-12-09 16:30:16.520964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:47.517  [2024-12-09 16:30:16.654833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:48.086  Running I/O for 1 seconds...
00:19:49.466      38304.00 IOPS,   149.62 MiB/s
00:19:49.466                                                                                                  Latency(us)
00:19:49.466  
[2024-12-09T16:30:18.645Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:49.466  Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:49.466  	 nvme0n1             :       1.03    5591.76      21.84       0.00     0.00   22869.38    9738.28   36005.32
00:19:49.466  Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:49.466  	 nvme0n2             :       1.03    5583.13      21.81       0.00     0.00   22890.06    9790.92   36215.88
00:19:49.466  Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:49.466  	 nvme0n3             :       1.03    5574.47      21.78       0.00     0.00   22909.86    9896.20   36426.44
00:19:49.466  Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:49.466  	 nvme1n1             :       1.03    5566.17      21.74       0.00     0.00   22928.69   10001.48   36636.99
00:19:49.466  Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:49.466  	 nvme2n1             :       1.04    9980.75      38.99       0.00     0.00   12774.71    5290.26   24319.38
00:19:49.466  Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:49.466  	 nvme3n1             :       1.04    5520.65      21.57       0.00     0.00   22967.42    7527.43   38110.89
00:19:49.466  
[2024-12-09T16:30:18.645Z]  ===================================================================================================================
00:19:49.466  
[2024-12-09T16:30:18.645Z]  Total                       :              37816.92     147.72       0.00     0.00   20223.31    5290.26   38110.89
00:19:50.404  
00:19:50.404  real	0m3.223s
00:19:50.404  user	0m2.422s
00:19:50.404  sys	0m0.614s
00:19:50.404   16:30:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:50.404   16:30:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:50.404  ************************************
00:19:50.404  END TEST bdev_write_zeroes
00:19:50.404  ************************************
00:19:50.404   16:30:19 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:50.404   16:30:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:50.404   16:30:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:50.404   16:30:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:50.404  ************************************
00:19:50.404  START TEST bdev_json_nonenclosed
00:19:50.404  ************************************
00:19:50.404   16:30:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:50.663  [2024-12-09 16:30:19.646986] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:50.664  [2024-12-09 16:30:19.647114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75907 ]
00:19:50.664  [2024-12-09 16:30:19.830356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:50.923  [2024-12-09 16:30:19.966876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:50.923  [2024-12-09 16:30:19.966992] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:50.923  [2024-12-09 16:30:19.967018] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:50.923  [2024-12-09 16:30:19.967031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:51.182  
00:19:51.182  real	0m0.698s
00:19:51.182  user	0m0.419s
00:19:51.182  sys	0m0.173s
00:19:51.182   16:30:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:51.182   16:30:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:51.182  ************************************
00:19:51.182  END TEST bdev_json_nonenclosed
00:19:51.182  ************************************
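
[editor's note] What bdev_json_nonenclosed exercises: a config whose top level is not wrapped in an object, which json_config rejects with the "not enclosed in {}" error above, making the app exit non-zero as the test expects. A hypothetical reproduction of that input shape (the real nonenclosed.json lives under test/bdev/):

    printf '"subsystems": []\n' > /tmp/nonenclosed.json   # bare key, no enclosing {}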
00:19:51.182   16:30:20 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:51.182   16:30:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:51.182   16:30:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:51.182   16:30:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:51.182  ************************************
00:19:51.182  START TEST bdev_json_nonarray
00:19:51.182  ************************************
00:19:51.182   16:30:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:51.441  [2024-12-09 16:30:20.416049] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:51.442  [2024-12-09 16:30:20.416217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75934 ]
00:19:51.442  [2024-12-09 16:30:20.595028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:51.701  [2024-12-09 16:30:20.729408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:51.701  [2024-12-09 16:30:20.729529] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:51.701  [2024-12-09 16:30:20.729555] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:51.701  [2024-12-09 16:30:20.729569] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:51.961  
00:19:51.961  real	0m0.678s
00:19:51.961  user	0m0.419s
00:19:51.961  sys	0m0.154s
00:19:51.961   16:30:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:51.961   16:30:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:51.961  ************************************
00:19:51.961  END TEST bdev_json_nonarray
00:19:51.961  ************************************
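
[editor's note] bdev_json_nonarray is the sibling negative test: the enclosing object is present, but "subsystems" is not an array, tripping the check at json_config.c:614 above. A hypothetical input of that shape:

    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json   # object where an array is required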
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:19:51.961   16:30:21 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:19:52.899  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:53.836  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:19:53.836  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:19:53.836  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:19:53.836  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:19:53.836  
00:19:53.836  real	0m56.867s
00:19:53.836  user	1m34.841s
00:19:53.836  sys	0m30.230s
00:19:53.836   16:30:22 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:53.836   16:30:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:53.836  ************************************
00:19:53.837  END TEST blockdev_xnvme
00:19:53.837  ************************************
00:19:53.837   16:30:22  -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:19:53.837   16:30:22  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:53.837   16:30:22  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:53.837   16:30:22  -- common/autotest_common.sh@10 -- # set +x
00:19:53.837  ************************************
00:19:53.837  START TEST ublk
00:19:53.837  ************************************
00:19:53.837   16:30:22 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:19:54.095  * Looking for test storage...
00:19:54.095  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:19:54.095    16:30:23 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:54.095     16:30:23 ublk -- common/autotest_common.sh@1711 -- # lcov --version
00:19:54.095     16:30:23 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:54.095    16:30:23 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:54.095    16:30:23 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:54.095    16:30:23 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:54.095    16:30:23 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:54.095    16:30:23 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:19:54.095    16:30:23 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:19:54.095    16:30:23 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:19:54.095    16:30:23 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:19:54.095    16:30:23 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:19:54.095    16:30:23 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:19:54.095    16:30:23 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:19:54.095    16:30:23 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:54.095    16:30:23 ublk -- scripts/common.sh@344 -- # case "$op" in
00:19:54.095    16:30:23 ublk -- scripts/common.sh@345 -- # : 1
00:19:54.095    16:30:23 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:54.095    16:30:23 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:54.095     16:30:23 ublk -- scripts/common.sh@365 -- # decimal 1
00:19:54.095     16:30:23 ublk -- scripts/common.sh@353 -- # local d=1
00:19:54.095     16:30:23 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:54.095     16:30:23 ublk -- scripts/common.sh@355 -- # echo 1
00:19:54.096    16:30:23 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:19:54.096     16:30:23 ublk -- scripts/common.sh@366 -- # decimal 2
00:19:54.096     16:30:23 ublk -- scripts/common.sh@353 -- # local d=2
00:19:54.096     16:30:23 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:54.096     16:30:23 ublk -- scripts/common.sh@355 -- # echo 2
00:19:54.096    16:30:23 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:19:54.096    16:30:23 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:54.096    16:30:23 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:54.096    16:30:23 ublk -- scripts/common.sh@368 -- # return 0
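
[editor's note] The scripts/common.sh trace above (@333-@368) is an element-wise version comparison: both versions are split on '.', '-' and ':' via IFS and compared field by field; here 1.15 < 2 already in the first field, so lt returns 0 and the lcov-specific LCOV_OPTS below get exported. A simplified sketch of the same logic (assumption: dotted numeric versions only):

    lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2.x"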
00:19:54.096    16:30:23 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:54.096    16:30:23 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:54.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:54.096  		--rc genhtml_branch_coverage=1
00:19:54.096  		--rc genhtml_function_coverage=1
00:19:54.096  		--rc genhtml_legend=1
00:19:54.096  		--rc geninfo_all_blocks=1
00:19:54.096  		--rc geninfo_unexecuted_blocks=1
00:19:54.096  		
00:19:54.096  		'
00:19:54.096    16:30:23 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:54.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:54.096  		--rc genhtml_branch_coverage=1
00:19:54.096  		--rc genhtml_function_coverage=1
00:19:54.096  		--rc genhtml_legend=1
00:19:54.096  		--rc geninfo_all_blocks=1
00:19:54.096  		--rc geninfo_unexecuted_blocks=1
00:19:54.096  		
00:19:54.096  		'
00:19:54.096    16:30:23 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:54.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:54.096  		--rc genhtml_branch_coverage=1
00:19:54.096  		--rc genhtml_function_coverage=1
00:19:54.096  		--rc genhtml_legend=1
00:19:54.096  		--rc geninfo_all_blocks=1
00:19:54.096  		--rc geninfo_unexecuted_blocks=1
00:19:54.096  		
00:19:54.096  		'
00:19:54.096    16:30:23 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:54.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:54.096  		--rc genhtml_branch_coverage=1
00:19:54.096  		--rc genhtml_function_coverage=1
00:19:54.096  		--rc genhtml_legend=1
00:19:54.096  		--rc geninfo_all_blocks=1
00:19:54.096  		--rc geninfo_unexecuted_blocks=1
00:19:54.096  		
00:19:54.096  		'
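
[editor's note] The LCOV_OPTS/LCOV exports above simply thread branch- and function-coverage flags into every later lcov call; when coverage is actually collected, the capture step would look roughly like this (a sketch, output path assumed):

    lcov $LCOV_OPTS --capture --directory . --output-file coverage.info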
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:19:54.096    16:30:23 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:19:54.096    16:30:23 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:19:54.096    16:30:23 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:19:54.096    16:30:23 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:19:54.096    16:30:23 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:19:54.096    16:30:23 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:19:54.096    16:30:23 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:19:54.096    16:30:23 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:19:54.096   16:30:23 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:19:54.096   16:30:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:54.096   16:30:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:54.096   16:30:23 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:54.096  ************************************
00:19:54.096  START TEST test_save_ublk_config
00:19:54.096  ************************************
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76230
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76230
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76230 ']'
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:54.096  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:54.096   16:30:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:54.355  [2024-12-09 16:30:23.303814] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:19:54.355  [2024-12-09 16:30:23.303960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76230 ]
00:19:54.355  [2024-12-09 16:30:23.485137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:54.615  [2024-12-09 16:30:23.612029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:55.553   16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:55.553   16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:19:55.553   16:30:24 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:19:55.553   16:30:24 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:19:55.553   16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.553   16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:55.553  [2024-12-09 16:30:24.642947] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:55.553  [2024-12-09 16:30:24.644194] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:55.813  malloc0
00:19:55.813  [2024-12-09 16:30:24.747072] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:55.813  [2024-12-09 16:30:24.747177] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:55.813  [2024-12-09 16:30:24.747191] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:55.813  [2024-12-09 16:30:24.747200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:55.813  [2024-12-09 16:30:24.756112] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:55.813  [2024-12-09 16:30:24.756135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:55.813  [2024-12-09 16:30:24.762946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:55.813  [2024-12-09 16:30:24.763052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:55.813  [2024-12-09 16:30:24.779939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:55.813  0
00:19:55.813   16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
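
[editor's note] The DEBUG sequence above (ADD_DEV -> SET_PARAMS -> START_DEV) corresponds to this RPC sequence against the ublk-enabled target; the parameters match the saved config dumped below (rpc.py calls are a sketch of what the harness's rpc_cmd wrapper performs):

    scripts/rpc.py ublk_create_target                      # saved config records cpumask "1"
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B = 32 MiB
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0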
00:19:55.813    16:30:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:19:55.813    16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.813    16:30:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:56.079    16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.079   16:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:19:56.079  "subsystems": [
00:19:56.079  {
00:19:56.079  "subsystem": "fsdev",
00:19:56.079  "config": [
00:19:56.079  {
00:19:56.079  "method": "fsdev_set_opts",
00:19:56.079  "params": {
00:19:56.079  "fsdev_io_pool_size": 65535,
00:19:56.079  "fsdev_io_cache_size": 256
00:19:56.079  }
00:19:56.079  }
00:19:56.079  ]
00:19:56.079  },
00:19:56.079  {
00:19:56.079  "subsystem": "keyring",
00:19:56.079  "config": []
00:19:56.079  },
00:19:56.079  {
00:19:56.079  "subsystem": "iobuf",
00:19:56.079  "config": [
00:19:56.079  {
00:19:56.079  "method": "iobuf_set_options",
00:19:56.079  "params": {
00:19:56.079  "small_pool_count": 8192,
00:19:56.079  "large_pool_count": 1024,
00:19:56.079  "small_bufsize": 8192,
00:19:56.079  "large_bufsize": 135168,
00:19:56.079  "enable_numa": false
00:19:56.079  }
00:19:56.079  }
00:19:56.079  ]
00:19:56.079  },
00:19:56.079  {
00:19:56.079  "subsystem": "sock",
00:19:56.079  "config": [
00:19:56.079  {
00:19:56.079  "method": "sock_set_default_impl",
00:19:56.079  "params": {
00:19:56.079  "impl_name": "posix"
00:19:56.079  }
00:19:56.079  },
00:19:56.079  {
00:19:56.079  "method": "sock_impl_set_options",
00:19:56.080  "params": {
00:19:56.080  "impl_name": "ssl",
00:19:56.080  "recv_buf_size": 4096,
00:19:56.080  "send_buf_size": 4096,
00:19:56.080  "enable_recv_pipe": true,
00:19:56.080  "enable_quickack": false,
00:19:56.080  "enable_placement_id": 0,
00:19:56.080  "enable_zerocopy_send_server": true,
00:19:56.080  "enable_zerocopy_send_client": false,
00:19:56.080  "zerocopy_threshold": 0,
00:19:56.080  "tls_version": 0,
00:19:56.080  "enable_ktls": false
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "sock_impl_set_options",
00:19:56.080  "params": {
00:19:56.080  "impl_name": "posix",
00:19:56.080  "recv_buf_size": 2097152,
00:19:56.080  "send_buf_size": 2097152,
00:19:56.080  "enable_recv_pipe": true,
00:19:56.080  "enable_quickack": false,
00:19:56.080  "enable_placement_id": 0,
00:19:56.080  "enable_zerocopy_send_server": true,
00:19:56.080  "enable_zerocopy_send_client": false,
00:19:56.080  "zerocopy_threshold": 0,
00:19:56.080  "tls_version": 0,
00:19:56.080  "enable_ktls": false
00:19:56.080  }
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "vmd",
00:19:56.080  "config": []
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "accel",
00:19:56.080  "config": [
00:19:56.080  {
00:19:56.080  "method": "accel_set_options",
00:19:56.080  "params": {
00:19:56.080  "small_cache_size": 128,
00:19:56.080  "large_cache_size": 16,
00:19:56.080  "task_count": 2048,
00:19:56.080  "sequence_count": 2048,
00:19:56.080  "buf_count": 2048
00:19:56.080  }
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "bdev",
00:19:56.080  "config": [
00:19:56.080  {
00:19:56.080  "method": "bdev_set_options",
00:19:56.080  "params": {
00:19:56.080  "bdev_io_pool_size": 65535,
00:19:56.080  "bdev_io_cache_size": 256,
00:19:56.080  "bdev_auto_examine": true,
00:19:56.080  "iobuf_small_cache_size": 128,
00:19:56.080  "iobuf_large_cache_size": 16
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "bdev_raid_set_options",
00:19:56.080  "params": {
00:19:56.080  "process_window_size_kb": 1024,
00:19:56.080  "process_max_bandwidth_mb_sec": 0
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "bdev_iscsi_set_options",
00:19:56.080  "params": {
00:19:56.080  "timeout_sec": 30
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "bdev_nvme_set_options",
00:19:56.080  "params": {
00:19:56.080  "action_on_timeout": "none",
00:19:56.080  "timeout_us": 0,
00:19:56.080  "timeout_admin_us": 0,
00:19:56.080  "keep_alive_timeout_ms": 10000,
00:19:56.080  "arbitration_burst": 0,
00:19:56.080  "low_priority_weight": 0,
00:19:56.080  "medium_priority_weight": 0,
00:19:56.080  "high_priority_weight": 0,
00:19:56.080  "nvme_adminq_poll_period_us": 10000,
00:19:56.080  "nvme_ioq_poll_period_us": 0,
00:19:56.080  "io_queue_requests": 0,
00:19:56.080  "delay_cmd_submit": true,
00:19:56.080  "transport_retry_count": 4,
00:19:56.080  "bdev_retry_count": 3,
00:19:56.080  "transport_ack_timeout": 0,
00:19:56.080  "ctrlr_loss_timeout_sec": 0,
00:19:56.080  "reconnect_delay_sec": 0,
00:19:56.080  "fast_io_fail_timeout_sec": 0,
00:19:56.080  "disable_auto_failback": false,
00:19:56.080  "generate_uuids": false,
00:19:56.080  "transport_tos": 0,
00:19:56.080  "nvme_error_stat": false,
00:19:56.080  "rdma_srq_size": 0,
00:19:56.080  "io_path_stat": false,
00:19:56.080  "allow_accel_sequence": false,
00:19:56.080  "rdma_max_cq_size": 0,
00:19:56.080  "rdma_cm_event_timeout_ms": 0,
00:19:56.080  "dhchap_digests": [
00:19:56.080  "sha256",
00:19:56.080  "sha384",
00:19:56.080  "sha512"
00:19:56.080  ],
00:19:56.080  "dhchap_dhgroups": [
00:19:56.080  "null",
00:19:56.080  "ffdhe2048",
00:19:56.080  "ffdhe3072",
00:19:56.080  "ffdhe4096",
00:19:56.080  "ffdhe6144",
00:19:56.080  "ffdhe8192"
00:19:56.080  ]
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "bdev_nvme_set_hotplug",
00:19:56.080  "params": {
00:19:56.080  "period_us": 100000,
00:19:56.080  "enable": false
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "bdev_malloc_create",
00:19:56.080  "params": {
00:19:56.080  "name": "malloc0",
00:19:56.080  "num_blocks": 8192,
00:19:56.080  "block_size": 4096,
00:19:56.080  "physical_block_size": 4096,
00:19:56.080  "uuid": "8c47ab05-10b2-4996-b948-af66c8bcdd4d",
00:19:56.080  "optimal_io_boundary": 0,
00:19:56.080  "md_size": 0,
00:19:56.080  "dif_type": 0,
00:19:56.080  "dif_is_head_of_md": false,
00:19:56.080  "dif_pi_format": 0
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "bdev_wait_for_examine"
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "scsi",
00:19:56.080  "config": null
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "scheduler",
00:19:56.080  "config": [
00:19:56.080  {
00:19:56.080  "method": "framework_set_scheduler",
00:19:56.080  "params": {
00:19:56.080  "name": "static"
00:19:56.080  }
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "vhost_scsi",
00:19:56.080  "config": []
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "vhost_blk",
00:19:56.080  "config": []
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "ublk",
00:19:56.080  "config": [
00:19:56.080  {
00:19:56.080  "method": "ublk_create_target",
00:19:56.080  "params": {
00:19:56.080  "cpumask": "1"
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "ublk_start_disk",
00:19:56.080  "params": {
00:19:56.080  "bdev_name": "malloc0",
00:19:56.080  "ublk_id": 0,
00:19:56.080  "num_queues": 1,
00:19:56.080  "queue_depth": 128
00:19:56.080  }
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "nbd",
00:19:56.080  "config": []
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "nvmf",
00:19:56.080  "config": [
00:19:56.080  {
00:19:56.080  "method": "nvmf_set_config",
00:19:56.080  "params": {
00:19:56.080  "discovery_filter": "match_any",
00:19:56.080  "admin_cmd_passthru": {
00:19:56.080  "identify_ctrlr": false
00:19:56.080  },
00:19:56.080  "dhchap_digests": [
00:19:56.080  "sha256",
00:19:56.080  "sha384",
00:19:56.080  "sha512"
00:19:56.080  ],
00:19:56.080  "dhchap_dhgroups": [
00:19:56.080  "null",
00:19:56.080  "ffdhe2048",
00:19:56.080  "ffdhe3072",
00:19:56.080  "ffdhe4096",
00:19:56.080  "ffdhe6144",
00:19:56.080  "ffdhe8192"
00:19:56.080  ]
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "nvmf_set_max_subsystems",
00:19:56.080  "params": {
00:19:56.080  "max_subsystems": 1024
00:19:56.080  }
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "method": "nvmf_set_crdt",
00:19:56.080  "params": {
00:19:56.080  "crdt1": 0,
00:19:56.080  "crdt2": 0,
00:19:56.080  "crdt3": 0
00:19:56.080  }
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  },
00:19:56.080  {
00:19:56.080  "subsystem": "iscsi",
00:19:56.080  "config": [
00:19:56.080  {
00:19:56.080  "method": "iscsi_set_options",
00:19:56.080  "params": {
00:19:56.080  "node_base": "iqn.2016-06.io.spdk",
00:19:56.080  "max_sessions": 128,
00:19:56.080  "max_connections_per_session": 2,
00:19:56.080  "max_queue_depth": 64,
00:19:56.080  "default_time2wait": 2,
00:19:56.080  "default_time2retain": 20,
00:19:56.080  "first_burst_length": 8192,
00:19:56.080  "immediate_data": true,
00:19:56.080  "allow_duplicated_isid": false,
00:19:56.080  "error_recovery_level": 0,
00:19:56.080  "nop_timeout": 60,
00:19:56.080  "nop_in_interval": 30,
00:19:56.080  "disable_chap": false,
00:19:56.080  "require_chap": false,
00:19:56.080  "mutual_chap": false,
00:19:56.080  "chap_group": 0,
00:19:56.080  "max_large_datain_per_connection": 64,
00:19:56.080  "max_r2t_per_connection": 4,
00:19:56.080  "pdu_pool_size": 36864,
00:19:56.080  "immediate_data_pool_size": 16384,
00:19:56.080  "data_out_pool_size": 2048
00:19:56.080  }
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  }
00:19:56.080  ]
00:19:56.080  }'
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76230
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76230 ']'
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76230
00:19:56.080    16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:56.080    16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76230
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:56.080  killing process with pid 76230
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76230'
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76230
00:19:56.080   16:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76230
00:19:58.042  [2024-12-09 16:30:27.042715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:58.042  [2024-12-09 16:30:27.081020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:58.042  [2024-12-09 16:30:27.081165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:58.042  [2024-12-09 16:30:27.085935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:58.042  [2024-12-09 16:30:27.085986] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:58.042  [2024-12-09 16:30:27.086003] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:58.042  [2024-12-09 16:30:27.086034] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:58.042  [2024-12-09 16:30:27.086194] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76301
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76301
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76301 ']'
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:19:59.948  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:59.948   16:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:59.948    16:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
00:19:59.948  "subsystems": [
00:19:59.948  {
00:19:59.948  "subsystem": "fsdev",
00:19:59.948  "config": [
00:19:59.948  {
00:19:59.948  "method": "fsdev_set_opts",
00:19:59.948  "params": {
00:19:59.948  "fsdev_io_pool_size": 65535,
00:19:59.948  "fsdev_io_cache_size": 256
00:19:59.948  }
00:19:59.948  }
00:19:59.948  ]
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "subsystem": "keyring",
00:19:59.948  "config": []
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "subsystem": "iobuf",
00:19:59.948  "config": [
00:19:59.948  {
00:19:59.948  "method": "iobuf_set_options",
00:19:59.948  "params": {
00:19:59.948  "small_pool_count": 8192,
00:19:59.948  "large_pool_count": 1024,
00:19:59.948  "small_bufsize": 8192,
00:19:59.948  "large_bufsize": 135168,
00:19:59.948  "enable_numa": false
00:19:59.948  }
00:19:59.948  }
00:19:59.948  ]
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "subsystem": "sock",
00:19:59.948  "config": [
00:19:59.948  {
00:19:59.948  "method": "sock_set_default_impl",
00:19:59.948  "params": {
00:19:59.948  "impl_name": "posix"
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "sock_impl_set_options",
00:19:59.948  "params": {
00:19:59.948  "impl_name": "ssl",
00:19:59.948  "recv_buf_size": 4096,
00:19:59.948  "send_buf_size": 4096,
00:19:59.948  "enable_recv_pipe": true,
00:19:59.948  "enable_quickack": false,
00:19:59.948  "enable_placement_id": 0,
00:19:59.948  "enable_zerocopy_send_server": true,
00:19:59.948  "enable_zerocopy_send_client": false,
00:19:59.948  "zerocopy_threshold": 0,
00:19:59.948  "tls_version": 0,
00:19:59.948  "enable_ktls": false
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "sock_impl_set_options",
00:19:59.948  "params": {
00:19:59.948  "impl_name": "posix",
00:19:59.948  "recv_buf_size": 2097152,
00:19:59.948  "send_buf_size": 2097152,
00:19:59.948  "enable_recv_pipe": true,
00:19:59.948  "enable_quickack": false,
00:19:59.948  "enable_placement_id": 0,
00:19:59.948  "enable_zerocopy_send_server": true,
00:19:59.948  "enable_zerocopy_send_client": false,
00:19:59.948  "zerocopy_threshold": 0,
00:19:59.948  "tls_version": 0,
00:19:59.948  "enable_ktls": false
00:19:59.948  }
00:19:59.948  }
00:19:59.948  ]
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "subsystem": "vmd",
00:19:59.948  "config": []
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "subsystem": "accel",
00:19:59.948  "config": [
00:19:59.948  {
00:19:59.948  "method": "accel_set_options",
00:19:59.948  "params": {
00:19:59.948  "small_cache_size": 128,
00:19:59.948  "large_cache_size": 16,
00:19:59.948  "task_count": 2048,
00:19:59.948  "sequence_count": 2048,
00:19:59.948  "buf_count": 2048
00:19:59.948  }
00:19:59.948  }
00:19:59.948  ]
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "subsystem": "bdev",
00:19:59.948  "config": [
00:19:59.948  {
00:19:59.948  "method": "bdev_set_options",
00:19:59.948  "params": {
00:19:59.948  "bdev_io_pool_size": 65535,
00:19:59.948  "bdev_io_cache_size": 256,
00:19:59.948  "bdev_auto_examine": true,
00:19:59.948  "iobuf_small_cache_size": 128,
00:19:59.948  "iobuf_large_cache_size": 16
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "bdev_raid_set_options",
00:19:59.948  "params": {
00:19:59.948  "process_window_size_kb": 1024,
00:19:59.948  "process_max_bandwidth_mb_sec": 0
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "bdev_iscsi_set_options",
00:19:59.948  "params": {
00:19:59.948  "timeout_sec": 30
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "bdev_nvme_set_options",
00:19:59.948  "params": {
00:19:59.948  "action_on_timeout": "none",
00:19:59.948  "timeout_us": 0,
00:19:59.948  "timeout_admin_us": 0,
00:19:59.948  "keep_alive_timeout_ms": 10000,
00:19:59.948  "arbitration_burst": 0,
00:19:59.948  "low_priority_weight": 0,
00:19:59.948  "medium_priority_weight": 0,
00:19:59.948  "high_priority_weight": 0,
00:19:59.948  "nvme_adminq_poll_period_us": 10000,
00:19:59.948  "nvme_ioq_poll_period_us": 0,
00:19:59.948  "io_queue_requests": 0,
00:19:59.948  "delay_cmd_submit": true,
00:19:59.948  "transport_retry_count": 4,
00:19:59.948  "bdev_retry_count": 3,
00:19:59.948  "transport_ack_timeout": 0,
00:19:59.948  "ctrlr_loss_timeout_sec": 0,
00:19:59.948  "reconnect_delay_sec": 0,
00:19:59.948  "fast_io_fail_timeout_sec": 0,
00:19:59.948  "disable_auto_failback": false,
00:19:59.948  "generate_uuids": false,
00:19:59.948  "transport_tos": 0,
00:19:59.948  "nvme_error_stat": false,
00:19:59.948  "rdma_srq_size": 0,
00:19:59.948  "io_path_stat": false,
00:19:59.948  "allow_accel_sequence": false,
00:19:59.948  "rdma_max_cq_size": 0,
00:19:59.948  "rdma_cm_event_timeout_ms": 0,
00:19:59.948  "dhchap_digests": [
00:19:59.948  "sha256",
00:19:59.948  "sha384",
00:19:59.948  "sha512"
00:19:59.948  ],
00:19:59.948  "dhchap_dhgroups": [
00:19:59.948  "null",
00:19:59.948  "ffdhe2048",
00:19:59.948  "ffdhe3072",
00:19:59.948  "ffdhe4096",
00:19:59.948  "ffdhe6144",
00:19:59.948  "ffdhe8192"
00:19:59.948  ]
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "bdev_nvme_set_hotplug",
00:19:59.948  "params": {
00:19:59.948  "period_us": 100000,
00:19:59.948  "enable": false
00:19:59.948  }
00:19:59.948  },
00:19:59.948  {
00:19:59.948  "method": "bdev_malloc_create",
00:19:59.948  "params": {
00:19:59.948  "name": "malloc0",
00:19:59.948  "num_blocks": 8192,
00:19:59.948  "block_size": 4096,
00:19:59.948  "physical_block_size": 4096,
00:19:59.949  "uuid": "8c47ab05-10b2-4996-b948-af66c8bcdd4d",
00:19:59.949  "optimal_io_boundary": 0,
00:19:59.949  "md_size": 0,
00:19:59.949  "dif_type": 0,
00:19:59.949  "dif_is_head_of_md": false,
00:19:59.949  "dif_pi_format": 0
00:19:59.949  }
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "method": "bdev_wait_for_examine"
00:19:59.949  }
00:19:59.949  ]
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "scsi",
00:19:59.949  "config": null
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "scheduler",
00:19:59.949  "config": [
00:19:59.949  {
00:19:59.949  "method": "framework_set_scheduler",
00:19:59.949  "params": {
00:19:59.949  "name": "static"
00:19:59.949  }
00:19:59.949  }
00:19:59.949  ]
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "vhost_scsi",
00:19:59.949  "config": []
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "vhost_blk",
00:19:59.949  "config": []
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "ublk",
00:19:59.949  "config": [
00:19:59.949  {
00:19:59.949  "method": "ublk_create_target",
00:19:59.949  "params": {
00:19:59.949  "cpumask": "1"
00:19:59.949  }
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "method": "ublk_start_disk",
00:19:59.949  "params": {
00:19:59.949  "bdev_name": "malloc0",
00:19:59.949  "ublk_id": 0,
00:19:59.949  "num_queues": 1,
00:19:59.949  "queue_depth": 128
00:19:59.949  }
00:19:59.949  }
00:19:59.949  ]
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "nbd",
00:19:59.949  "config": []
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "nvmf",
00:19:59.949  "config": [
00:19:59.949  {
00:19:59.949  "method": "nvmf_set_config",
00:19:59.949  "params": {
00:19:59.949  "discovery_filter": "match_any",
00:19:59.949  "admin_cmd_passthru": {
00:19:59.949  "identify_ctrlr": false
00:19:59.949  },
00:19:59.949  "dhchap_digests": [
00:19:59.949  "sha256",
00:19:59.949  "sha384",
00:19:59.949  "sha512"
00:19:59.949  ],
00:19:59.949  "dhchap_dhgroups": [
00:19:59.949  "null",
00:19:59.949  "ffdhe2048",
00:19:59.949  "ffdhe3072",
00:19:59.949  "ffdhe4096",
00:19:59.949  "ffdhe6144",
00:19:59.949  "ffdhe8192"
00:19:59.949  ]
00:19:59.949  }
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "method": "nvmf_set_max_subsystems",
00:19:59.949  "params": {
00:19:59.949  "max_subsystems": 1024
00:19:59.949  }
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "method": "nvmf_set_crdt",
00:19:59.949  "params": {
00:19:59.949  "crdt1": 0,
00:19:59.949  "crdt2": 0,
00:19:59.949  "crdt3": 0
00:19:59.949  }
00:19:59.949  }
00:19:59.949  ]
00:19:59.949  },
00:19:59.949  {
00:19:59.949  "subsystem": "iscsi",
00:19:59.949  "config": [
00:19:59.949  {
00:19:59.949  "method": "iscsi_set_options",
00:19:59.949  "params": {
00:19:59.949  "node_base": "iqn.2016-06.io.spdk",
00:19:59.949  "max_sessions": 128,
00:19:59.949  "max_connections_per_session": 2,
00:19:59.949  "max_queue_depth": 64,
00:19:59.949  "default_time2wait": 2,
00:19:59.949  "default_time2retain": 20,
00:19:59.949  "first_burst_length": 8192,
00:19:59.949  "immediate_data": true,
00:19:59.949  "allow_duplicated_isid": false,
00:19:59.949  "error_recovery_level": 0,
00:19:59.949  "nop_timeout": 60,
00:19:59.949  "nop_in_interval": 30,
00:19:59.949  "disable_chap": false,
00:19:59.949  "require_chap": false,
00:19:59.949  "mutual_chap": false,
00:19:59.949  "chap_group": 0,
00:19:59.949  "max_large_datain_per_connection": 64,
00:19:59.949  "max_r2t_per_connection": 4,
00:19:59.949  "pdu_pool_size": 36864,
00:19:59.949  "immediate_data_pool_size": 16384,
00:19:59.949  "data_out_pool_size": 2048
00:19:59.949  }
00:19:59.949  }
00:19:59.949  ]
00:19:59.949  }
00:19:59.949  ]
00:19:59.949  }'
00:20:00.208  [2024-12-09 16:30:29.148460] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:20:00.208  [2024-12-09 16:30:29.148569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76301 ]
00:20:00.208  [2024-12-09 16:30:29.325972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:00.467  [2024-12-09 16:30:29.460161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:01.845  [2024-12-09 16:30:30.604914] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:01.845  [2024-12-09 16:30:30.606215] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:01.845  [2024-12-09 16:30:30.612059] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:20:01.845  [2024-12-09 16:30:30.612173] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:20:01.845  [2024-12-09 16:30:30.612186] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:20:01.845  [2024-12-09 16:30:30.612194] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:20:01.845  [2024-12-09 16:30:30.619932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:01.845  [2024-12-09 16:30:30.619952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:01.845  [2024-12-09 16:30:30.627932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:01.845  [2024-12-09 16:30:30.628028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:20:01.845  [2024-12-09 16:30:30.651933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76301
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76301 ']'
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76301
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:01.845    16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76301
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:01.845  killing process with pid 76301
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76301'
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76301
00:20:01.845   16:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76301
00:20:03.224  [2024-12-09 16:30:32.281345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:20:03.224  [2024-12-09 16:30:32.323926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:03.224  [2024-12-09 16:30:32.324075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:20:03.224  [2024-12-09 16:30:32.331927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:03.224  [2024-12-09 16:30:32.331977] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:20:03.224  [2024-12-09 16:30:32.331986] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:20:03.224  [2024-12-09 16:30:32.332009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:03.224  [2024-12-09 16:30:32.332155] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:20:05.128   16:30:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:20:05.128  
00:20:05.128  real	0m10.920s
00:20:05.128  user	0m7.830s
00:20:05.128  sys	0m3.813s
00:20:05.128   16:30:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:05.128   16:30:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:20:05.128  ************************************
00:20:05.128  END TEST test_save_ublk_config
00:20:05.128  ************************************
00:20:05.128   16:30:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76387
00:20:05.128   16:30:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:20:05.128   16:30:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:05.128   16:30:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76387
00:20:05.128   16:30:34 ublk -- common/autotest_common.sh@835 -- # '[' -z 76387 ']'
00:20:05.128   16:30:34 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:05.128   16:30:34 ublk -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:05.128  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:05.128   16:30:34 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:05.128   16:30:34 ublk -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:05.128   16:30:34 ublk -- common/autotest_common.sh@10 -- # set +x
00:20:05.128  [2024-12-09 16:30:34.288377] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:20:05.128  [2024-12-09 16:30:34.288491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76387 ]
00:20:05.387  [2024-12-09 16:30:34.472478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:05.646  [2024-12-09 16:30:34.582623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:05.646  [2024-12-09 16:30:34.582629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:06.582   16:30:35 ublk -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:06.582   16:30:35 ublk -- common/autotest_common.sh@868 -- # return 0
00:20:06.582   16:30:35 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:20:06.582   16:30:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:06.582   16:30:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:06.582   16:30:35 ublk -- common/autotest_common.sh@10 -- # set +x
00:20:06.582  ************************************
00:20:06.582  START TEST test_create_ublk
00:20:06.582  ************************************
00:20:06.582   16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk
00:20:06.582    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:20:06.582    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.582    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:06.582  [2024-12-09 16:30:35.472917] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:06.582  [2024-12-09 16:30:35.479555] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:06.582    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.582   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:20:06.582    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:20:06.582    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.582    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:06.840    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.840   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:20:06.840    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:20:06.840    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.840    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:06.840  [2024-12-09 16:30:35.771077] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:20:06.840  [2024-12-09 16:30:35.771527] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:20:06.841  [2024-12-09 16:30:35.771543] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:20:06.841  [2024-12-09 16:30:35.771551] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:20:06.841  [2024-12-09 16:30:35.780295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:06.841  [2024-12-09 16:30:35.780321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:06.841  [2024-12-09 16:30:35.786941] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:06.841  [2024-12-09 16:30:35.787533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:20:06.841  [2024-12-09 16:30:35.801975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:20:06.841    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.841   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:20:06.841   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:20:06.841    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:20:06.841    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.841    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:06.841    16:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.841   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:20:06.841  {
00:20:06.841  "ublk_device": "/dev/ublkb0",
00:20:06.841  "id": 0,
00:20:06.841  "queue_depth": 512,
00:20:06.841  "num_queues": 4,
00:20:06.841  "bdev_name": "Malloc0"
00:20:06.841  }
00:20:06.841  ]'
00:20:06.841    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:20:06.841   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:20:06.841    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:20:06.841   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:20:06.841    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:20:06.841   16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:20:06.841    16:30:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:20:06.841   16:30:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:20:06.841    16:30:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:20:07.100   16:30:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
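[editor's note] The checks above pull one field per jq call and compare each in bash. A combined query would return all four fields at once; a sketch against the same rpc.py, reusing the -n filter seen at ublk.sh@39 (the exact one-liner is not from this run):

    ./scripts/rpc.py ublk_get_disks -n 0 \
        | jq -r '.[0] | [.ublk_device, (.id|tostring), (.queue_depth|tostring), (.num_queues|tostring), .bdev_name] | join(" ")'
    # expected output: /dev/ublkb0 0 512 4 Malloc0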
00:20:07.100   16:30:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:20:07.100   16:30:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:20:07.100  fio: verification read phase will never start because write phase uses all of runtime
00:20:07.100  fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:20:07.100  fio-3.35
00:20:07.100  Starting 1 process
00:20:19.312  
00:20:19.312  fio_test: (groupid=0, jobs=1): err= 0: pid=76439: Mon Dec  9 16:30:46 2024
00:20:19.312    write: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(465MiB/10001msec); 0 zone resets
00:20:19.312      clat (usec): min=40, max=3999, avg=83.17, stdev=108.74
00:20:19.312       lat (usec): min=40, max=3999, avg=83.61, stdev=108.74
00:20:19.312      clat percentiles (usec):
00:20:19.312       |  1.00th=[   44],  5.00th=[   70], 10.00th=[   73], 20.00th=[   75],
00:20:19.312       | 30.00th=[   77], 40.00th=[   78], 50.00th=[   79], 60.00th=[   80],
00:20:19.312       | 70.00th=[   82], 80.00th=[   84], 90.00th=[   87], 95.00th=[   92],
00:20:19.312       | 99.00th=[  105], 99.50th=[  114], 99.90th=[ 2343], 99.95th=[ 2999],
00:20:19.312       | 99.99th=[ 3654]
00:20:19.312     bw (  KiB/s): min=46136, max=60800, per=100.00%, avg=47746.95, stdev=3199.15, samples=19
00:20:19.312     iops        : min=11534, max=15200, avg=11936.84, stdev=799.78, samples=19
00:20:19.312    lat (usec)   : 50=3.64%, 100=94.61%, 250=1.51%, 500=0.01%, 750=0.01%
00:20:19.312    lat (usec)   : 1000=0.02%
00:20:19.312    lat (msec)   : 2=0.09%, 4=0.12%
00:20:19.312    cpu          : usr=2.24%, sys=9.43%, ctx=119101, majf=0, minf=798
00:20:19.312    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:19.312       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:19.312       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:19.312       issued rwts: total=0,119099,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:19.312       latency   : target=0, window=0, percentile=100.00%, depth=1
00:20:19.312  
00:20:19.312  Run status group 0 (all jobs):
00:20:19.312    WRITE: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=465MiB (488MB), run=10001-10001msec
00:20:19.312  
00:20:19.312  Disk stats (read/write):
00:20:19.312    ublkb0: ios=0/117921, merge=0/0, ticks=0/8678, in_queue=8678, util=99.13%
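[editor's note] fio warned at start-up that the verification read phase would never begin: with --time_based, the write phase consumes the whole 10 s runtime. A separate read-back pass with matching verify options would check the 0xcc pattern; a sketch against the same device (the job name verify_test is made up, all other flags mirror the write job above):

    fio --name=verify_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0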
00:20:19.312   16:30:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.312  [2024-12-09 16:30:46.307235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:20:19.312  [2024-12-09 16:30:46.347519] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:19.312  [2024-12-09 16:30:46.348431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:20:19.312  [2024-12-09 16:30:46.354954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:19.312  [2024-12-09 16:30:46.355301] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:20:19.312  [2024-12-09 16:30:46.355319] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.312   16:30:46 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:19.312    16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.312  [2024-12-09 16:30:46.378013] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:20:19.312  request:
00:20:19.312  {
00:20:19.312  "ublk_id": 0,
00:20:19.312  "method": "ublk_stop_disk",
00:20:19.312  "req_id": 1
00:20:19.312  }
00:20:19.312  Got JSON-RPC error response
00:20:19.312  response:
00:20:19.312  {
00:20:19.312  "code": -19,
00:20:19.312  "message": "No such device"
00:20:19.312  }
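[editor's note] The -19 is Linux ENODEV: the first ublk_stop_disk already removed ublk 0, so this deliberate second call fails, and the NOT wrapper treats the non-zero exit as the expected outcome. The same response can be provoked by hand against a target with no disks, assuming the stock scripts/rpc.py:

    ./scripts/rpc.py ublk_stop_disk 0   # -> "No such device" (-19) when ublk 0 does not exist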
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:19.312   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:19.313   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:19.313   16:30:46 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:20:19.313   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  [2024-12-09 16:30:46.402010] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:19.313  [2024-12-09 16:30:46.410914] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:20:19.313  [2024-12-09 16:30:46.410958] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:20:19.313   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:46 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:20:19.313   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313   16:30:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313   16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:20:19.313    16:30:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:20:19.313    16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313    16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:20:19.313    16:30:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:20:19.313   16:30:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:20:19.313    16:30:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:20:19.313    16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313    16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:20:19.313    16:30:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:20:19.313   16:30:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:20:19.313  
00:20:19.313  real	0m11.768s
00:20:19.313  user	0m0.612s
00:20:19.313  sys	0m1.081s
00:20:19.313   16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:19.313   16:30:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  ************************************
00:20:19.313  END TEST test_create_ublk
00:20:19.313  ************************************
00:20:19.313   16:30:47 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:20:19.313   16:30:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:19.313   16:30:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:19.313   16:30:47 ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  ************************************
00:20:19.313  START TEST test_create_multi_ublk
00:20:19.313  ************************************
00:20:19.313   16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  [2024-12-09 16:30:47.314926] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:19.313  [2024-12-09 16:30:47.317458] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:20:19.313   16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  [2024-12-09 16:30:47.707105] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:20:19.313  [2024-12-09 16:30:47.707560] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:20:19.313  [2024-12-09 16:30:47.707572] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:20:19.313  [2024-12-09 16:30:47.707585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:20:19.313  [2024-12-09 16:30:47.715416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:19.313  [2024-12-09 16:30:47.715443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:19.313  [2024-12-09 16:30:47.729924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:19.313  [2024-12-09 16:30:47.730500] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:20:19.313  [2024-12-09 16:30:47.741938] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:20:19.313   16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  [2024-12-09 16:30:48.100071] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:20:19.313  [2024-12-09 16:30:48.100519] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:20:19.313  [2024-12-09 16:30:48.100534] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:20:19.313  [2024-12-09 16:30:48.100541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:20:19.313  [2024-12-09 16:30:48.107967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:19.313  [2024-12-09 16:30:48.107988] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:19.313  [2024-12-09 16:30:48.118971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:19.313  [2024-12-09 16:30:48.119510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:20:19.313  [2024-12-09 16:30:48.130012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:20:19.313   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.313  [2024-12-09 16:30:48.396060] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:20:19.313  [2024-12-09 16:30:48.396519] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:20:19.313  [2024-12-09 16:30:48.396531] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:20:19.313  [2024-12-09 16:30:48.396542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:20:19.313  [2024-12-09 16:30:48.403976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:19.313  [2024-12-09 16:30:48.404001] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:19.313  [2024-12-09 16:30:48.411958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:19.313  [2024-12-09 16:30:48.412532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:20:19.313  [2024-12-09 16:30:48.420977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.313   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:20:19.313   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.313    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.573    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.573   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:20:19.573    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:20:19.573    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.573    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.573  [2024-12-09 16:30:48.727098] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:20:19.573  [2024-12-09 16:30:48.727540] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:20:19.573  [2024-12-09 16:30:48.727554] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:20:19.573  [2024-12-09 16:30:48.727562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
00:20:19.573  [2024-12-09 16:30:48.739222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:19.573  [2024-12-09 16:30:48.739244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:19.573  [2024-12-09 16:30:48.745973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:19.573  [2024-12-09 16:30:48.746540] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:20:19.832  [2024-12-09 16:30:48.754966] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:20:19.832    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:20:19.833  {
00:20:19.833  "ublk_device": "/dev/ublkb0",
00:20:19.833  "id": 0,
00:20:19.833  "queue_depth": 512,
00:20:19.833  "num_queues": 4,
00:20:19.833  "bdev_name": "Malloc0"
00:20:19.833  },
00:20:19.833  {
00:20:19.833  "ublk_device": "/dev/ublkb1",
00:20:19.833  "id": 1,
00:20:19.833  "queue_depth": 512,
00:20:19.833  "num_queues": 4,
00:20:19.833  "bdev_name": "Malloc1"
00:20:19.833  },
00:20:19.833  {
00:20:19.833  "ublk_device": "/dev/ublkb2",
00:20:19.833  "id": 2,
00:20:19.833  "queue_depth": 512,
00:20:19.833  "num_queues": 4,
00:20:19.833  "bdev_name": "Malloc2"
00:20:19.833  },
00:20:19.833  {
00:20:19.833  "ublk_device": "/dev/ublkb3",
00:20:19.833  "id": 3,
00:20:19.833  "queue_depth": 512,
00:20:19.833  "num_queues": 4,
00:20:19.833  "bdev_name": "Malloc3"
00:20:19.833  }
00:20:19.833  ]'
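[editor's note] The four entries above come from repeating the same two RPCs once per index, as the seq 0 $MAX_DEV_ID loops in the trace show. A loop-form sketch using exactly the parameters visible above (128 MiB malloc bdevs with 4 KiB blocks; 4 queues of depth 512 per disk):

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # backing bdev for ublk $i
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i
    done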
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:20:19.833   16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:20:19.833    16:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.092    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:20:20.092    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:20:20.092    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:20:20.092    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:20:20.092    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:20:20.092   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.092    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:20:20.352   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:20:20.352    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:20:20.611    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:20:20.611    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:20:20.611    16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:20.611  [2024-12-09 16:30:49.630033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:20:20.611  [2024-12-09 16:30:49.672618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:20.611  [2024-12-09 16:30:49.678297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:20:20.611  [2024-12-09 16:30:49.685972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:20.611  [2024-12-09 16:30:49.686318] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:20:20.611  [2024-12-09 16:30:49.686331] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.611   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:20.611  [2024-12-09 16:30:49.702044] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:20:20.611  [2024-12-09 16:30:49.742632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:20.611  [2024-12-09 16:30:49.744213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:20:20.611  [2024-12-09 16:30:49.746978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:20.611  [2024-12-09 16:30:49.747316] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:20:20.612  [2024-12-09 16:30:49.747330] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:20:20.612   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.612   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.612   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:20:20.612   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.612   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:20.612  [2024-12-09 16:30:49.765042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:20:20.871  [2024-12-09 16:30:49.794639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:20.871  [2024-12-09 16:30:49.796113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:20:20.871  [2024-12-09 16:30:49.804966] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:20.871  [2024-12-09 16:30:49.805316] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:20:20.871  [2024-12-09 16:30:49.805329] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:20.871  [2024-12-09 16:30:49.820087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:20:20.871  [2024-12-09 16:30:49.855993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:20.871  [2024-12-09 16:30:49.857333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:20:20.871  [2024-12-09 16:30:49.865007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:20.871  [2024-12-09 16:30:49.865358] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:20:20.871  [2024-12-09 16:30:49.865372] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.871   16:30:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:20:21.130  [2024-12-09 16:30:50.060028] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:21.130  [2024-12-09 16:30:50.068900] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:20:21.130  [2024-12-09 16:30:50.068939] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
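[editor's note] Teardown runs in two stages: each disk is stopped first (the UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV pairs above), and only then is the target destroyed, here with an extended RPC timeout. A sketch of the same cleanup, mirroring ublk.sh@85-91:

    for i in 0 1 2 3; do ./scripts/rpc.py ublk_stop_disk $i; done
    ./scripts/rpc.py -t 120 ublk_destroy_target   # long timeout: destroy waits for kernel-side teardown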
00:20:21.130    16:30:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:20:21.130   16:30:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:21.130   16:30:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:20:21.130   16:30:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.130   16:30:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:21.698   16:30:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.698   16:30:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:21.698   16:30:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:20:21.698   16:30:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.698   16:30:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:22.266   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.266   16:30:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:22.266   16:30:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:20:22.266   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.266   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:22.525   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.525   16:30:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:22.525   16:30:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:20:22.525   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.525   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:22.784   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.784   16:30:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.784   16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:20:22.784   16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.784   16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:20:22.784    16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:20:23.044   16:30:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:20:23.044  
00:20:23.044  real	0m4.690s
00:20:23.044  user	0m1.009s
00:20:23.044  sys	0m0.221s
00:20:23.044   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:23.044   16:30:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:23.044  ************************************
00:20:23.044  END TEST test_create_multi_ublk
00:20:23.044  ************************************
00:20:23.044   16:30:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:20:23.044   16:30:52 ublk -- ublk/ublk.sh@147 -- # cleanup
00:20:23.044   16:30:52 ublk -- ublk/ublk.sh@130 -- # killprocess 76387
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@954 -- # '[' -z 76387 ']'
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@958 -- # kill -0 76387
00:20:23.044    16:30:52 ublk -- common/autotest_common.sh@959 -- # uname
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:23.044    16:30:52 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76387
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:23.044  killing process with pid 76387
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76387'
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@973 -- # kill 76387
00:20:23.044   16:30:52 ublk -- common/autotest_common.sh@978 -- # wait 76387
00:20:24.423  [2024-12-09 16:30:53.180751] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:24.423  [2024-12-09 16:30:53.180821] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:20:25.360  
00:20:25.360  real	0m31.466s
00:20:25.360  user	0m43.978s
00:20:25.360  sys	0m10.945s
00:20:25.360   16:30:54 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:25.360   16:30:54 ublk -- common/autotest_common.sh@10 -- # set +x
00:20:25.360  ************************************
00:20:25.360  END TEST ublk
00:20:25.360  ************************************
00:20:25.360   16:30:54  -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:20:25.360   16:30:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:25.360   16:30:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:25.360   16:30:54  -- common/autotest_common.sh@10 -- # set +x
00:20:25.360  ************************************
00:20:25.360  START TEST ublk_recovery
00:20:25.360  ************************************
00:20:25.360   16:30:54 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:20:25.620  * Looking for test storage...
00:20:25.620  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:20:25.620    16:30:54 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:25.620     16:30:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:20:25.620     16:30:54 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:25.620    16:30:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:25.620     16:30:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:25.620    16:30:54 ublk_recovery -- scripts/common.sh@368 -- # return 0
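Note: the xtrace above is scripts/common.sh deciding that the detected lcov
version (1.15, pulled out with awk) is older than 2, segment by segment. A
condensed bash sketch of that comparison follows; it is a simplification of
what the trace shows, not the verbatim scripts/common.sh source, and the
operator handling is reduced to the cases seen here:

cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2 ver1 ver2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *'='* ]]                 # all segments equal: only <=, >=, == pass
}
cmp_versions 1.15 '<' 2 && echo "lcov predates 2: use the legacy --rc options"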
00:20:25.620    16:30:54 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:25.620    16:30:54 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:25.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.620  		--rc genhtml_branch_coverage=1
00:20:25.620  		--rc genhtml_function_coverage=1
00:20:25.620  		--rc genhtml_legend=1
00:20:25.620  		--rc geninfo_all_blocks=1
00:20:25.620  		--rc geninfo_unexecuted_blocks=1
00:20:25.620  		
00:20:25.620  		'
00:20:25.620    16:30:54 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:25.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.620  		--rc genhtml_branch_coverage=1
00:20:25.620  		--rc genhtml_function_coverage=1
00:20:25.620  		--rc genhtml_legend=1
00:20:25.620  		--rc geninfo_all_blocks=1
00:20:25.621  		--rc geninfo_unexecuted_blocks=1
00:20:25.621  		
00:20:25.621  		'
00:20:25.621    16:30:54 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:25.621  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.621  		--rc genhtml_branch_coverage=1
00:20:25.621  		--rc genhtml_function_coverage=1
00:20:25.621  		--rc genhtml_legend=1
00:20:25.621  		--rc geninfo_all_blocks=1
00:20:25.621  		--rc geninfo_unexecuted_blocks=1
00:20:25.621  		
00:20:25.621  		'
00:20:25.621    16:30:54 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:25.621  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:25.621  		--rc genhtml_branch_coverage=1
00:20:25.621  		--rc genhtml_function_coverage=1
00:20:25.621  		--rc genhtml_legend=1
00:20:25.621  		--rc geninfo_all_blocks=1
00:20:25.621  		--rc geninfo_unexecuted_blocks=1
00:20:25.621  		
00:20:25.621  		'
00:20:25.621   16:30:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:20:25.621    16:30:54 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:20:25.621   16:30:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:20:25.621   16:30:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76814
00:20:25.621   16:30:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:20:25.621   16:30:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:25.621   16:30:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76814
00:20:25.621   16:30:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76814 ']'
00:20:25.621   16:30:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:25.621   16:30:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:25.621  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:25.621   16:30:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:25.621   16:30:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:25.621   16:30:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.880  [2024-12-09 16:30:54.806407] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:20:25.880  [2024-12-09 16:30:54.806538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76814 ]
00:20:25.880  [2024-12-09 16:30:54.988528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:26.139  [2024-12-09 16:30:55.096963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.139  [2024-12-09 16:30:55.097031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:20:27.076   16:30:55 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:27.076  [2024-12-09 16:30:55.938919] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:27.076  [2024-12-09 16:30:55.941579] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:27.076   16:30:55 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:27.076   16:30:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:27.076  malloc0
00:20:27.076   16:30:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:27.076   16:30:56 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:20:27.076   16:30:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:27.076   16:30:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:27.076  [2024-12-09 16:30:56.083149] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:20:27.076  [2024-12-09 16:30:56.083258] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:20:27.076  [2024-12-09 16:30:56.083272] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:20:27.076  [2024-12-09 16:30:56.083280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:20:27.076  [2024-12-09 16:30:56.090955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:27.076  [2024-12-09 16:30:56.090978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:27.076  [2024-12-09 16:30:56.098976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:27.076  [2024-12-09 16:30:56.099114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:20:27.076  [2024-12-09 16:30:56.109081] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:20:27.076  1
00:20:27.076   16:30:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:27.076   16:30:56 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:20:28.013   16:30:57 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76859
00:20:28.013   16:30:57 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:20:28.013   16:30:57 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:20:28.273  fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:28.273  fio-3.35
00:20:28.273  Starting 1 process
00:20:33.552   16:31:02 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76814
00:20:33.552   16:31:02 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:20:38.836  /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76814 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:20:38.836  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:38.836   16:31:07 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76960
00:20:38.836   16:31:07 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:20:38.836   16:31:07 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:38.836   16:31:07 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76960
00:20:38.836   16:31:07 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76960 ']'
00:20:38.836   16:31:07 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:38.836   16:31:07 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:38.836   16:31:07 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:38.836   16:31:07 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:38.836   16:31:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:38.836  [2024-12-09 16:31:07.244446] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:20:38.836  [2024-12-09 16:31:07.244575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76960 ]
00:20:38.836  [2024-12-09 16:31:07.425503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:38.836  [2024-12-09 16:31:07.535888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:38.836  [2024-12-09 16:31:07.535957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:20:39.404   16:31:08 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:39.404  [2024-12-09 16:31:08.351918] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:39.404  [2024-12-09 16:31:08.354631] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.404   16:31:08 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:39.404  malloc0
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.404   16:31:08 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:39.404  [2024-12-09 16:31:08.492077] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:20:39.404  [2024-12-09 16:31:08.492120] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:20:39.404  [2024-12-09 16:31:08.492149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:39.404  [2024-12-09 16:31:08.499977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:39.404  [2024-12-09 16:31:08.500017] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2
00:20:39.404  [2024-12-09 16:31:08.500027] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:20:39.404  [2024-12-09 16:31:08.500123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:20:39.404  1
00:20:39.404   16:31:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.404   16:31:08 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76859
00:20:39.404  [2024-12-09 16:31:08.507958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:20:39.404  [2024-12-09 16:31:08.515681] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:20:39.404  [2024-12-09 16:31:08.523168] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:20:39.404  [2024-12-09 16:31:08.523195] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
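Note: recovery finished while the 60 s fio job was still running. A minimal
sketch of the crash-and-recover sequence this test exercises follows; the RPC
names, fio options, and paths are the ones traced above, while spdk_pid and
fio_pid stand for PIDs captured at launch, and the wait-for-RPC-socket step
after the restart is omitted for brevity:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096      # 64 MiB malloc bdev, 4 KiB blocks
$rpc ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1 appears
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
fio_pid=$!
kill -9 "$spdk_pid"                             # hard-kill the target mid-I/O
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!
$rpc ublk_create_target                         # fresh target instance
$rpc bdev_malloc_create -b malloc0 64 4096
$rpc ublk_recover_disk malloc0 1                # re-attach dev 1; held I/O resumes
wait "$fio_pid"                                 # fio completes its full 60 s run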
00:21:35.709  
00:21:35.709  fio_test: (groupid=0, jobs=1): err= 0: pid=76863: Mon Dec  9 16:31:57 2024
00:21:35.709    read: IOPS=18.1k, BW=70.6MiB/s (74.0MB/s)(4235MiB/60002msec)
00:21:35.709      slat (usec): min=2, max=523, avg= 9.47, stdev= 2.40
00:21:35.709      clat (usec): min=1431, max=6409.3k, avg=3459.73, stdev=47643.22
00:21:35.709       lat (usec): min=1440, max=6409.3k, avg=3469.20, stdev=47643.22
00:21:35.709      clat percentiles (usec):
00:21:35.709       |  1.00th=[ 2409],  5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2900],
00:21:35.709       | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032],
00:21:35.709       | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3326], 95.00th=[ 4080],
00:21:35.709       | 99.00th=[ 5473], 99.50th=[ 6325], 99.90th=[ 7635], 99.95th=[ 9241],
00:21:35.709       | 99.99th=[13304]
00:21:35.709     bw (  KiB/s): min=27104, max=83256, per=100.00%, avg=80385.83, stdev=7422.72, samples=107
00:21:35.709     iops        : min= 6776, max=20814, avg=20096.43, stdev=1855.68, samples=107
00:21:35.709    write: IOPS=18.1k, BW=70.5MiB/s (74.0MB/s)(4233MiB/60002msec); 0 zone resets
00:21:35.709      slat (usec): min=2, max=2393, avg= 9.47, stdev= 3.30
00:21:35.709      clat (usec): min=1486, max=6409.5k, avg=3605.72, stdev=50735.67
00:21:35.709       lat (usec): min=1497, max=6409.5k, avg=3615.19, stdev=50735.68
00:21:35.709      clat percentiles (usec):
00:21:35.709       |  1.00th=[ 2474],  5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 2999],
00:21:35.709       | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163],
00:21:35.709       | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3392], 95.00th=[ 4080],
00:21:35.709       | 99.00th=[ 5473], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 9241],
00:21:35.709       | 99.99th=[13566]
00:21:35.709     bw (  KiB/s): min=27544, max=83240, per=100.00%, avg=80345.35, stdev=7402.27, samples=107
00:21:35.709     iops        : min= 6886, max=20810, avg=20086.31, stdev=1850.56, samples=107
00:21:35.709    lat (msec)   : 2=0.02%, 4=94.63%, 10=5.32%, 20=0.02%, >=2000=0.01%
00:21:35.709    cpu          : usr=12.90%, sys=34.23%, ctx=101515, majf=0, minf=13
00:21:35.709    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:21:35.709       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:35.709       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:35.709       issued rwts: total=1084157,1083540,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:35.709       latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:35.709  
00:21:35.709  Run status group 0 (all jobs):
00:21:35.709     READ: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=4235MiB (4441MB), run=60002-60002msec
00:21:35.709    WRITE: bw=70.5MiB/s (74.0MB/s), 70.5MiB/s-70.5MiB/s (74.0MB/s-74.0MB/s), io=4233MiB (4438MB), run=60002-60002msec
00:21:35.709  
00:21:35.709  Disk stats (read/write):
00:21:35.709    ublkb1: ios=1081988/1081304, merge=0/0, ticks=3630294/3650424, in_queue=7280718, util=99.96%
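Note: the latency tail above is consistent with the induced outage rather than
device misbehavior: the maximum completion latency of ~6409k usec (about
6.4 s) matches the gap between the kill -9 at 16:31:02 and "recover done
successfully" at 16:31:08, during which in-flight I/O was held by the ublk
device and then resumed instead of failing.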
00:21:35.709   16:31:57 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:21:35.709   16:31:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.709   16:31:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:35.710  [2024-12-09 16:31:57.401762] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:21:35.710  [2024-12-09 16:31:57.436076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:21:35.710  [2024-12-09 16:31:57.436264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:21:35.710  [2024-12-09 16:31:57.443983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:21:35.710  [2024-12-09 16:31:57.444147] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:21:35.710  [2024-12-09 16:31:57.444160] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.710   16:31:57 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:35.710  [2024-12-09 16:31:57.459088] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:21:35.710  [2024-12-09 16:31:57.467928] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:21:35.710  [2024-12-09 16:31:57.467974] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.710   16:31:57 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:21:35.710   16:31:57 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:21:35.710   16:31:57 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76960
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76960 ']'
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76960
00:21:35.710    16:31:57 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:35.710    16:31:57 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76960
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:35.710  killing process with pid 76960
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76960'
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76960
00:21:35.710   16:31:57 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76960
00:21:35.710  [2024-12-09 16:31:59.076279] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:21:35.710  [2024-12-09 16:31:59.076353] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:21:35.710  
00:21:35.710  real	1m5.964s
00:21:35.710  user	1m52.414s
00:21:35.710  sys	0m36.867s
00:21:35.710   16:32:00 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:35.710   16:32:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:35.710  ************************************
00:21:35.710  END TEST ublk_recovery
00:21:35.710  ************************************
00:21:35.710   16:32:00  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:21:35.710   16:32:00  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@260 -- # timing_exit lib
00:21:35.710   16:32:00  -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:35.710   16:32:00  -- common/autotest_common.sh@10 -- # set +x
00:21:35.710   16:32:00  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:21:35.710   16:32:00  -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:21:35.710   16:32:00  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:35.710   16:32:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:35.710   16:32:00  -- common/autotest_common.sh@10 -- # set +x
00:21:35.710  ************************************
00:21:35.710  START TEST ftl
00:21:35.710  ************************************
00:21:35.710   16:32:00 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:21:35.710  * Looking for test storage...
00:21:35.710  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:35.710     16:32:00 ftl -- common/autotest_common.sh@1711 -- # lcov --version
00:21:35.710     16:32:00 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:35.710    16:32:00 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:35.710    16:32:00 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:35.710    16:32:00 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:35.710    16:32:00 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:21:35.710    16:32:00 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:21:35.710    16:32:00 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:21:35.710    16:32:00 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:21:35.710    16:32:00 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:21:35.710    16:32:00 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:21:35.710    16:32:00 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:21:35.710    16:32:00 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:35.710    16:32:00 ftl -- scripts/common.sh@344 -- # case "$op" in
00:21:35.710    16:32:00 ftl -- scripts/common.sh@345 -- # : 1
00:21:35.710    16:32:00 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:35.710    16:32:00 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:35.710     16:32:00 ftl -- scripts/common.sh@365 -- # decimal 1
00:21:35.710     16:32:00 ftl -- scripts/common.sh@353 -- # local d=1
00:21:35.710     16:32:00 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:35.710     16:32:00 ftl -- scripts/common.sh@355 -- # echo 1
00:21:35.710    16:32:00 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:21:35.710     16:32:00 ftl -- scripts/common.sh@366 -- # decimal 2
00:21:35.710     16:32:00 ftl -- scripts/common.sh@353 -- # local d=2
00:21:35.710     16:32:00 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:35.710     16:32:00 ftl -- scripts/common.sh@355 -- # echo 2
00:21:35.710    16:32:00 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:21:35.710    16:32:00 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:35.710    16:32:00 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:35.710    16:32:00 ftl -- scripts/common.sh@368 -- # return 0
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:35.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.710  		--rc genhtml_branch_coverage=1
00:21:35.710  		--rc genhtml_function_coverage=1
00:21:35.710  		--rc genhtml_legend=1
00:21:35.710  		--rc geninfo_all_blocks=1
00:21:35.710  		--rc geninfo_unexecuted_blocks=1
00:21:35.710  		
00:21:35.710  		'
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:35.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.710  		--rc genhtml_branch_coverage=1
00:21:35.710  		--rc genhtml_function_coverage=1
00:21:35.710  		--rc genhtml_legend=1
00:21:35.710  		--rc geninfo_all_blocks=1
00:21:35.710  		--rc geninfo_unexecuted_blocks=1
00:21:35.710  		
00:21:35.710  		'
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:35.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.710  		--rc genhtml_branch_coverage=1
00:21:35.710  		--rc genhtml_function_coverage=1
00:21:35.710  		--rc genhtml_legend=1
00:21:35.710  		--rc geninfo_all_blocks=1
00:21:35.710  		--rc geninfo_unexecuted_blocks=1
00:21:35.710  		
00:21:35.710  		'
00:21:35.710    16:32:00 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:35.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:35.710  		--rc genhtml_branch_coverage=1
00:21:35.710  		--rc genhtml_function_coverage=1
00:21:35.710  		--rc genhtml_legend=1
00:21:35.710  		--rc geninfo_all_blocks=1
00:21:35.710  		--rc geninfo_unexecuted_blocks=1
00:21:35.710  		
00:21:35.710  		'
00:21:35.710   16:32:00 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:35.710      16:32:00 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:21:35.710     16:32:00 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:35.710    16:32:00 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:35.710     16:32:00 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:35.710    16:32:00 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:35.710    16:32:00 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:35.710    16:32:00 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:35.710    16:32:00 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:35.710    16:32:00 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:35.710    16:32:00 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:35.710    16:32:00 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:35.710    16:32:00 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:35.710    16:32:00 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:35.710    16:32:00 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:35.710    16:32:00 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:35.710    16:32:00 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:35.710    16:32:00 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:35.710    16:32:00 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:35.710    16:32:00 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:35.710    16:32:00 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:35.710    16:32:00 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:35.710    16:32:00 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:35.710    16:32:00 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:35.710    16:32:00 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:35.710    16:32:00 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:35.710    16:32:00 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:35.710    16:32:00 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:35.710    16:32:00 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:35.710   16:32:00 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:35.710   16:32:00 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:21:35.711   16:32:00 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:21:35.711   16:32:00 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:21:35.711   16:32:00 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:21:35.711   16:32:00 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:35.711  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:35.711  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:35.711  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:35.711  0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:35.711  0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:35.711   16:32:01 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77773
00:21:35.711   16:32:01 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:21:35.711   16:32:01 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77773
00:21:35.711   16:32:01 ftl -- common/autotest_common.sh@835 -- # '[' -z 77773 ']'
00:21:35.711   16:32:01 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:35.711   16:32:01 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:35.711  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:35.711   16:32:01 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:35.711   16:32:01 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:35.711   16:32:01 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:35.711  [2024-12-09 16:32:01.828200] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:21:35.711  [2024-12-09 16:32:01.828333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77773 ]
00:21:35.711  [2024-12-09 16:32:02.008781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:35.711  [2024-12-09 16:32:02.113807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:35.711   16:32:02 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:35.711   16:32:02 ftl -- common/autotest_common.sh@868 -- # return 0
00:21:35.711   16:32:02 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:21:35.711   16:32:02 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:21:35.711   16:32:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:21:35.711    16:32:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:21:35.711    16:32:04 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:21:35.711    16:32:04 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@50 -- # break
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:21:35.711    16:32:04 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:21:35.711    16:32:04 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@63 -- # break
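Note: the two jq passes above pick the FTL test devices. The cache disk must
be non-zoned, expose 64-byte metadata (md_size==64), and have at least
1310720 blocks, which matches 0000:00:10.0; the base disk is any other
non-zoned bdev of the same minimum size whose PCI address differs from the
cache's, which selects 0000:00:11.0.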
00:21:35.711   16:32:04 ftl -- ftl/ftl.sh@66 -- # killprocess 77773
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@954 -- # '[' -z 77773 ']'
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@958 -- # kill -0 77773
00:21:35.711    16:32:04 ftl -- common/autotest_common.sh@959 -- # uname
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:35.711    16:32:04 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77773
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:35.711  killing process with pid 77773
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77773'
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@973 -- # kill 77773
00:21:35.711   16:32:04 ftl -- common/autotest_common.sh@978 -- # wait 77773
00:21:38.246   16:32:06 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:21:38.246   16:32:06 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:21:38.246   16:32:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:21:38.246   16:32:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:38.246   16:32:06 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:38.246  ************************************
00:21:38.246  START TEST ftl_fio_basic
00:21:38.246  ************************************
00:21:38.246   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:21:38.246  * Looking for test storage...
00:21:38.246  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:38.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.246  		--rc genhtml_branch_coverage=1
00:21:38.246  		--rc genhtml_function_coverage=1
00:21:38.246  		--rc genhtml_legend=1
00:21:38.246  		--rc geninfo_all_blocks=1
00:21:38.246  		--rc geninfo_unexecuted_blocks=1
00:21:38.246  		
00:21:38.246  		'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:38.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.246  		--rc genhtml_branch_coverage=1
00:21:38.246  		--rc genhtml_function_coverage=1
00:21:38.246  		--rc genhtml_legend=1
00:21:38.246  		--rc geninfo_all_blocks=1
00:21:38.246  		--rc geninfo_unexecuted_blocks=1
00:21:38.246  		
00:21:38.246  		'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:38.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.246  		--rc genhtml_branch_coverage=1
00:21:38.246  		--rc genhtml_function_coverage=1
00:21:38.246  		--rc genhtml_legend=1
00:21:38.246  		--rc geninfo_all_blocks=1
00:21:38.246  		--rc geninfo_unexecuted_blocks=1
00:21:38.246  		
00:21:38.246  		'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:21:38.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:38.246  		--rc genhtml_branch_coverage=1
00:21:38.246  		--rc genhtml_function_coverage=1
00:21:38.246  		--rc genhtml_legend=1
00:21:38.246  		--rc geninfo_all_blocks=1
00:21:38.246  		--rc geninfo_unexecuted_blocks=1
00:21:38.246  		
00:21:38.246  		'
00:21:38.246   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:38.246      16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:38.246     16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:38.246    16:32:07 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:38.246   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:21:38.246   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77923
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77923
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77923 ']'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:38.247  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:38.247   16:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:38.247  [2024-12-09 16:32:07.369416] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:21:38.247  [2024-12-09 16:32:07.369535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77923 ]
00:21:38.505  [2024-12-09 16:32:07.550556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:38.505  [2024-12-09 16:32:07.661098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:38.505  [2024-12-09 16:32:07.661234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:38.505  [2024-12-09 16:32:07.661266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:39.442   16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:39.442   16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0
00:21:39.442    16:32:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:21:39.442    16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:21:39.442    16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:21:39.442    16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:21:39.442    16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:21:39.442     16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:21:39.701    16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:21:39.701    16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:21:39.701     16:32:08 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:21:39.701     16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:21:39.701     16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:39.701     16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:39.701     16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:39.701      16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:21:39.960     16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:39.960    {
00:21:39.960      "name": "nvme0n1",
00:21:39.960      "aliases": [
00:21:39.960        "fc8f9310-2947-46eb-b145-97f6a20dcc0d"
00:21:39.960      ],
00:21:39.960      "product_name": "NVMe disk",
00:21:39.960      "block_size": 4096,
00:21:39.960      "num_blocks": 1310720,
00:21:39.960      "uuid": "fc8f9310-2947-46eb-b145-97f6a20dcc0d",
00:21:39.960      "numa_id": -1,
00:21:39.960      "assigned_rate_limits": {
00:21:39.960        "rw_ios_per_sec": 0,
00:21:39.960        "rw_mbytes_per_sec": 0,
00:21:39.960        "r_mbytes_per_sec": 0,
00:21:39.960        "w_mbytes_per_sec": 0
00:21:39.960      },
00:21:39.960      "claimed": false,
00:21:39.960      "zoned": false,
00:21:39.960      "supported_io_types": {
00:21:39.960        "read": true,
00:21:39.960        "write": true,
00:21:39.960        "unmap": true,
00:21:39.960        "flush": true,
00:21:39.960        "reset": true,
00:21:39.960        "nvme_admin": true,
00:21:39.960        "nvme_io": true,
00:21:39.960        "nvme_io_md": false,
00:21:39.960        "write_zeroes": true,
00:21:39.960        "zcopy": false,
00:21:39.960        "get_zone_info": false,
00:21:39.960        "zone_management": false,
00:21:39.960        "zone_append": false,
00:21:39.960        "compare": true,
00:21:39.960        "compare_and_write": false,
00:21:39.960        "abort": true,
00:21:39.960        "seek_hole": false,
00:21:39.960        "seek_data": false,
00:21:39.960        "copy": true,
00:21:39.960        "nvme_iov_md": false
00:21:39.960      },
00:21:39.960      "driver_specific": {
00:21:39.960        "nvme": [
00:21:39.960          {
00:21:39.960            "pci_address": "0000:00:11.0",
00:21:39.960            "trid": {
00:21:39.960              "trtype": "PCIe",
00:21:39.960              "traddr": "0000:00:11.0"
00:21:39.960            },
00:21:39.960            "ctrlr_data": {
00:21:39.960              "cntlid": 0,
00:21:39.960              "vendor_id": "0x1b36",
00:21:39.960              "model_number": "QEMU NVMe Ctrl",
00:21:39.960              "serial_number": "12341",
00:21:39.960              "firmware_revision": "8.0.0",
00:21:39.960              "subnqn": "nqn.2019-08.org.qemu:12341",
00:21:39.960              "oacs": {
00:21:39.960                "security": 0,
00:21:39.960                "format": 1,
00:21:39.960                "firmware": 0,
00:21:39.960                "ns_manage": 1
00:21:39.960              },
00:21:39.960              "multi_ctrlr": false,
00:21:39.960              "ana_reporting": false
00:21:39.960            },
00:21:39.960            "vs": {
00:21:39.960              "nvme_version": "1.4"
00:21:39.960            },
00:21:39.960            "ns_data": {
00:21:39.960              "id": 1,
00:21:39.960              "can_share": false
00:21:39.960            }
00:21:39.960          }
00:21:39.960        ],
00:21:39.960        "mp_policy": "active_passive"
00:21:39.960      }
00:21:39.960    }
00:21:39.960  ]'
00:21:39.960      16:32:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:39.960     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:39.960      16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:39.960     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720
00:21:39.960     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:21:39.960     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120
00:21:39.960    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120
00:21:39.960    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
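Note: get_bdev_size converts the JSON fields dumped above into MiB, so the traced values reduce to:

    echo $(( 4096 * 1310720 / 1024 / 1024 ))   # block_size * num_blocks -> 5120 MiB

The [[ 103424 -le 5120 ]] guard is therefore false, and the flow drops straight through to clear_lvols and the lvstore creation below; thin provisioning (note the -t flag shortly) is what lets a 103424 MiB volume sit on the 5120 MiB QEMU NVMe device.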
00:21:39.960    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols
00:21:39.960     16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:21:39.960     16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:21:40.219    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores=
00:21:40.219     16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=b58bf2b2-5374-43e5-868f-b9ab272a8d83
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b58bf2b2-5374-43e5-868f-b9ab272a8d83
00:21:40.478   16:32:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=fdff9d77-4032-4160-aee5-71fa17f2f96d
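Note: the base volume is built with two lvol RPCs, and -t makes it thin-provisioned, so its 103424 MiB size is virtual rather than reserved up front (a standalone sketch of the calls traced above):

    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"   # prints the lvol UUID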
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:40.478    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size=
00:21:40.478     16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:40.478     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:40.478     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:40.478     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:40.478     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:40.478      16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:40.736     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:40.736    {
00:21:40.736      "name": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:40.736      "aliases": [
00:21:40.736        "lvs/nvme0n1p0"
00:21:40.736      ],
00:21:40.736      "product_name": "Logical Volume",
00:21:40.736      "block_size": 4096,
00:21:40.736      "num_blocks": 26476544,
00:21:40.736      "uuid": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:40.736      "assigned_rate_limits": {
00:21:40.736        "rw_ios_per_sec": 0,
00:21:40.736        "rw_mbytes_per_sec": 0,
00:21:40.736        "r_mbytes_per_sec": 0,
00:21:40.736        "w_mbytes_per_sec": 0
00:21:40.736      },
00:21:40.736      "claimed": false,
00:21:40.736      "zoned": false,
00:21:40.736      "supported_io_types": {
00:21:40.736        "read": true,
00:21:40.736        "write": true,
00:21:40.736        "unmap": true,
00:21:40.736        "flush": false,
00:21:40.736        "reset": true,
00:21:40.736        "nvme_admin": false,
00:21:40.736        "nvme_io": false,
00:21:40.736        "nvme_io_md": false,
00:21:40.736        "write_zeroes": true,
00:21:40.736        "zcopy": false,
00:21:40.736        "get_zone_info": false,
00:21:40.736        "zone_management": false,
00:21:40.736        "zone_append": false,
00:21:40.736        "compare": false,
00:21:40.736        "compare_and_write": false,
00:21:40.736        "abort": false,
00:21:40.736        "seek_hole": true,
00:21:40.736        "seek_data": true,
00:21:40.736        "copy": false,
00:21:40.736        "nvme_iov_md": false
00:21:40.736      },
00:21:40.736      "driver_specific": {
00:21:40.736        "lvol": {
00:21:40.736          "lvol_store_uuid": "b58bf2b2-5374-43e5-868f-b9ab272a8d83",
00:21:40.736          "base_bdev": "nvme0n1",
00:21:40.736          "thin_provision": true,
00:21:40.736          "num_allocated_clusters": 0,
00:21:40.736          "snapshot": false,
00:21:40.736          "clone": false,
00:21:40.736          "esnap_clone": false
00:21:40.736        }
00:21:40.736      }
00:21:40.736    }
00:21:40.736  ]'
00:21:40.736      16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:40.736     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:40.736      16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:40.995     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:40.995     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:40.995     16:32:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:21:40.995    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171
00:21:40.995    16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev
00:21:40.995     16:32:09 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:21:41.253    16:32:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:21:41.253    16:32:10 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]]
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:41.253      16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:41.253    {
00:21:41.253      "name": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:41.253      "aliases": [
00:21:41.253        "lvs/nvme0n1p0"
00:21:41.253      ],
00:21:41.253      "product_name": "Logical Volume",
00:21:41.253      "block_size": 4096,
00:21:41.253      "num_blocks": 26476544,
00:21:41.253      "uuid": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:41.253      "assigned_rate_limits": {
00:21:41.253        "rw_ios_per_sec": 0,
00:21:41.253        "rw_mbytes_per_sec": 0,
00:21:41.253        "r_mbytes_per_sec": 0,
00:21:41.253        "w_mbytes_per_sec": 0
00:21:41.253      },
00:21:41.253      "claimed": false,
00:21:41.253      "zoned": false,
00:21:41.253      "supported_io_types": {
00:21:41.253        "read": true,
00:21:41.253        "write": true,
00:21:41.253        "unmap": true,
00:21:41.253        "flush": false,
00:21:41.253        "reset": true,
00:21:41.253        "nvme_admin": false,
00:21:41.253        "nvme_io": false,
00:21:41.253        "nvme_io_md": false,
00:21:41.253        "write_zeroes": true,
00:21:41.253        "zcopy": false,
00:21:41.253        "get_zone_info": false,
00:21:41.253        "zone_management": false,
00:21:41.253        "zone_append": false,
00:21:41.253        "compare": false,
00:21:41.253        "compare_and_write": false,
00:21:41.253        "abort": false,
00:21:41.253        "seek_hole": true,
00:21:41.253        "seek_data": true,
00:21:41.253        "copy": false,
00:21:41.253        "nvme_iov_md": false
00:21:41.253      },
00:21:41.253      "driver_specific": {
00:21:41.253        "lvol": {
00:21:41.253          "lvol_store_uuid": "b58bf2b2-5374-43e5-868f-b9ab272a8d83",
00:21:41.253          "base_bdev": "nvme0n1",
00:21:41.253          "thin_provision": true,
00:21:41.253          "num_allocated_clusters": 0,
00:21:41.253          "snapshot": false,
00:21:41.253          "clone": false,
00:21:41.253          "esnap_clone": false
00:21:41.253        }
00:21:41.253      }
00:21:41.253    }
00:21:41.253  ]'
00:21:41.253      16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:41.253     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:41.253      16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:41.511     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:41.511     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:41.511     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:21:41.511   16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0
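Note: the 5171 MiB cache size works out to the base volume size divided by 20 (103424 / 20 = 5171, i.e. a 5% NV cache; the formula is inferred from the traced values), carved out of nvc0n1 as a single split partition:

    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1   # -> nvc0n1p0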
00:21:41.511   16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60
00:21:41.511   16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']'
00:21:41.511  /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected
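Note: the 'unary operator expected' message above is a real, if harmless, shell bug at fio.sh line 52: a variable expanded to the empty string, leaving '[' -eq 1 ']' with no left operand, so the test is malformed and evaluates false. Quoting with a default (or using [[ ]], which treats an empty arithmetic operand as 0) keeps the test well-formed; with a hypothetical $flag:

    [ "${flag:-0}" -eq 1 ]    # or: [[ $flag -eq 1 ]]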
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:41.511    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:41.511     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdff9d77-4032-4160-aee5-71fa17f2f96d
00:21:41.769    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:41.769    {
00:21:41.769      "name": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:41.769      "aliases": [
00:21:41.769        "lvs/nvme0n1p0"
00:21:41.769      ],
00:21:41.769      "product_name": "Logical Volume",
00:21:41.769      "block_size": 4096,
00:21:41.769      "num_blocks": 26476544,
00:21:41.769      "uuid": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:41.769      "assigned_rate_limits": {
00:21:41.769        "rw_ios_per_sec": 0,
00:21:41.769        "rw_mbytes_per_sec": 0,
00:21:41.769        "r_mbytes_per_sec": 0,
00:21:41.769        "w_mbytes_per_sec": 0
00:21:41.770      },
00:21:41.770      "claimed": false,
00:21:41.770      "zoned": false,
00:21:41.770      "supported_io_types": {
00:21:41.770        "read": true,
00:21:41.770        "write": true,
00:21:41.770        "unmap": true,
00:21:41.770        "flush": false,
00:21:41.770        "reset": true,
00:21:41.770        "nvme_admin": false,
00:21:41.770        "nvme_io": false,
00:21:41.770        "nvme_io_md": false,
00:21:41.770        "write_zeroes": true,
00:21:41.770        "zcopy": false,
00:21:41.770        "get_zone_info": false,
00:21:41.770        "zone_management": false,
00:21:41.770        "zone_append": false,
00:21:41.770        "compare": false,
00:21:41.770        "compare_and_write": false,
00:21:41.770        "abort": false,
00:21:41.770        "seek_hole": true,
00:21:41.770        "seek_data": true,
00:21:41.770        "copy": false,
00:21:41.770        "nvme_iov_md": false
00:21:41.770      },
00:21:41.770      "driver_specific": {
00:21:41.770        "lvol": {
00:21:41.770          "lvol_store_uuid": "b58bf2b2-5374-43e5-868f-b9ab272a8d83",
00:21:41.770          "base_bdev": "nvme0n1",
00:21:41.770          "thin_provision": true,
00:21:41.770          "num_allocated_clusters": 0,
00:21:41.770          "snapshot": false,
00:21:41.770          "clone": false,
00:21:41.770          "esnap_clone": false
00:21:41.770        }
00:21:41.770      }
00:21:41.770    }
00:21:41.770  ]'
00:21:41.770     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:41.770    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:41.770     16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:41.770    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:41.770    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:41.770    16:32:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:21:41.770   16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60
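Note: the 60 MiB DRAM budget for the L2P table follows from the volume size and l2p_percentage=60; the integer arithmetic is inferred from the traced values:

    echo $(( 103424 * 60 / 100 / 1024 ))   # -> 60, passed below as --l2p_dram_limit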
00:21:41.770   16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']'
00:21:41.770   16:32:10 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fdff9d77-4032-4160-aee5-71fa17f2f96d -c nvc0n1p0 --l2p_dram_limit 60
00:21:42.030  [2024-12-09 16:32:11.157273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.157330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:21:42.030  [2024-12-09 16:32:11.157352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:21:42.030  [2024-12-09 16:32:11.157366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.157459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.157478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:21:42.030  [2024-12-09 16:32:11.157498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:21:42.030  [2024-12-09 16:32:11.157511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.157573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:42.030  [2024-12-09 16:32:11.163041] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:42.030  [2024-12-09 16:32:11.163110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.163125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:21:42.030  [2024-12-09 16:32:11.163143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.545 ms
00:21:42.030  [2024-12-09 16:32:11.163156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.163274] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ac49845c-d52d-40e4-ac02-795f51843c8b
00:21:42.030  [2024-12-09 16:32:11.165020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.165068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:21:42.030  [2024-12-09 16:32:11.165085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.041 ms
00:21:42.030  [2024-12-09 16:32:11.165102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.172830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.172870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:21:42.030  [2024-12-09 16:32:11.172902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.563 ms
00:21:42.030  [2024-12-09 16:32:11.172929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.173069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.173090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:21:42.030  [2024-12-09 16:32:11.173105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.093 ms
00:21:42.030  [2024-12-09 16:32:11.173126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.173221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.173267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:21:42.030  [2024-12-09 16:32:11.173281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:21:42.030  [2024-12-09 16:32:11.173299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.173365] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:42.030  [2024-12-09 16:32:11.178308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.178346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:21:42.030  [2024-12-09 16:32:11.178382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.965 ms
00:21:42.030  [2024-12-09 16:32:11.178400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.178469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.178485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:21:42.030  [2024-12-09 16:32:11.178500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:21:42.030  [2024-12-09 16:32:11.178513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.178592] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:21:42.030  [2024-12-09 16:32:11.178764] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:42.030  [2024-12-09 16:32:11.178790] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:42.030  [2024-12-09 16:32:11.178807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:21:42.030  [2024-12-09 16:32:11.178827] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:21:42.030  [2024-12-09 16:32:11.178843] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:21:42.030  [2024-12-09 16:32:11.178861] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:21:42.030  [2024-12-09 16:32:11.178876] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:21:42.030  [2024-12-09 16:32:11.178892] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:21:42.030  [2024-12-09 16:32:11.178905] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:21:42.030  [2024-12-09 16:32:11.178935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.178951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:21:42.030  [2024-12-09 16:32:11.178967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.349 ms
00:21:42.030  [2024-12-09 16:32:11.178980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.179082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.030  [2024-12-09 16:32:11.179108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:21:42.030  [2024-12-09 16:32:11.179124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.058 ms
00:21:42.030  [2024-12-09 16:32:11.179136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:42.030  [2024-12-09 16:32:11.179287] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:42.030  [2024-12-09 16:32:11.179302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:21:42.030  [2024-12-09 16:32:11.179323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:21:42.030  [2024-12-09 16:32:11.179365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:21:42.030  [2024-12-09 16:32:11.179410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:21:42.030  [2024-12-09 16:32:11.179438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:21:42.030  [2024-12-09 16:32:11.179450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:21:42.030  [2024-12-09 16:32:11.179466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:21:42.030  [2024-12-09 16:32:11.179478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:21:42.030  [2024-12-09 16:32:11.179492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:21:42.030  [2024-12-09 16:32:11.179504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:21:42.030  [2024-12-09 16:32:11.179533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:21:42.030  [2024-12-09 16:32:11.179576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:21:42.030  [2024-12-09 16:32:11.179614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:21:42.030  [2024-12-09 16:32:11.179655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:21:42.030  [2024-12-09 16:32:11.179694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:42.030  [2024-12-09 16:32:11.179720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:21:42.030  [2024-12-09 16:32:11.179737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:21:42.030  [2024-12-09 16:32:11.179787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:21:42.030  [2024-12-09 16:32:11.179800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:21:42.030  [2024-12-09 16:32:11.179814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:21:42.030  [2024-12-09 16:32:11.179826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:21:42.030  [2024-12-09 16:32:11.179841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:21:42.030  [2024-12-09 16:32:11.179853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:21:42.030  [2024-12-09 16:32:11.179879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:21:42.030  [2024-12-09 16:32:11.179907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:42.030  [2024-12-09 16:32:11.179920] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:42.031  [2024-12-09 16:32:11.179935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:21:42.031  [2024-12-09 16:32:11.179947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:21:42.031  [2024-12-09 16:32:11.179962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:42.031  [2024-12-09 16:32:11.179976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:21:42.031  [2024-12-09 16:32:11.179993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:21:42.031  [2024-12-09 16:32:11.180005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:21:42.031  [2024-12-09 16:32:11.180020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:21:42.031  [2024-12-09 16:32:11.180032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:21:42.031  [2024-12-09 16:32:11.180047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:21:42.031  [2024-12-09 16:32:11.180075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:42.031  [2024-12-09 16:32:11.180098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:21:42.031  [2024-12-09 16:32:11.180127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:21:42.031  [2024-12-09 16:32:11.180141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:21:42.031  [2024-12-09 16:32:11.180155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:21:42.031  [2024-12-09 16:32:11.180169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:21:42.031  [2024-12-09 16:32:11.180186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:21:42.031  [2024-12-09 16:32:11.180198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:21:42.031  [2024-12-09 16:32:11.180214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:21:42.031  [2024-12-09 16:32:11.180226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:21:42.031  [2024-12-09 16:32:11.180244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:21:42.031  [2024-12-09 16:32:11.180315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:42.031  [2024-12-09 16:32:11.180333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:42.031  [2024-12-09 16:32:11.180365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:42.031  [2024-12-09 16:32:11.180378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:42.031  [2024-12-09 16:32:11.180393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:42.031  [2024-12-09 16:32:11.180409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:42.031  [2024-12-09 16:32:11.180424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:21:42.031  [2024-12-09 16:32:11.180437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.184 ms
00:21:42.031  [2024-12-09 16:32:11.180453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
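Note: the layout numbers above are internally consistent: 20971520 L2P entries at 4 B per address give exactly the 80 MiB l2p region, which the superblock dump describes as 0x5000 blocks of 4 KiB:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80 (MiB of L2P table)
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # -> 80 (MiB, region type 0x2)

The same 20971520 figure resurfaces later as num_blocks of the exposed ftl0 bdev, i.e. 80 GiB of user-visible capacity.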
00:21:42.031  [2024-12-09 16:32:11.180564] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:21:42.031  [2024-12-09 16:32:11.180585] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:21:47.299  [2024-12-09 16:32:15.815884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.815993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:21:47.299  [2024-12-09 16:32:15.816014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4642.844 ms
00:21:47.299  [2024-12-09 16:32:15.816030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.862731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.862812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:21:47.299  [2024-12-09 16:32:15.862831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 46.366 ms
00:21:47.299  [2024-12-09 16:32:15.862847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.863018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.863038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:21:47.299  [2024-12-09 16:32:15.863053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.057 ms
00:21:47.299  [2024-12-09 16:32:15.863071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.920488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.920546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:21:47.299  [2024-12-09 16:32:15.920567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 57.397 ms
00:21:47.299  [2024-12-09 16:32:15.920601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.920669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.920686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:21:47.299  [2024-12-09 16:32:15.920700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:21:47.299  [2024-12-09 16:32:15.920715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.921260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.921292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:21:47.299  [2024-12-09 16:32:15.921306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.452 ms
00:21:47.299  [2024-12-09 16:32:15.921326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.921470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.921489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:21:47.299  [2024-12-09 16:32:15.921502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.097 ms
00:21:47.299  [2024-12-09 16:32:15.921520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.942648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.942700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:47.299  [2024-12-09 16:32:15.942732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.115 ms
00:21:47.299  [2024-12-09 16:32:15.942748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:15.954117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:47.299  [2024-12-09 16:32:15.970701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:15.970755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:21:47.299  [2024-12-09 16:32:15.970796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.850 ms
00:21:47.299  [2024-12-09 16:32:15.970808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.102103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.102169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:21:47.299  [2024-12-09 16:32:16.102213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 131.432 ms
00:21:47.299  [2024-12-09 16:32:16.102226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.102464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.102483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:21:47.299  [2024-12-09 16:32:16.102503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.182 ms
00:21:47.299  [2024-12-09 16:32:16.102515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.138516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.138568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:21:47.299  [2024-12-09 16:32:16.138605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.955 ms
00:21:47.299  [2024-12-09 16:32:16.138618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.173391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.173436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:21:47.299  [2024-12-09 16:32:16.173473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.767 ms
00:21:47.299  [2024-12-09 16:32:16.173484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.174208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.174240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:21:47.299  [2024-12-09 16:32:16.174258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.670 ms
00:21:47.299  [2024-12-09 16:32:16.174271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.295750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.295812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:21:47.299  [2024-12-09 16:32:16.295855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 121.568 ms
00:21:47.299  [2024-12-09 16:32:16.295872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.299  [2024-12-09 16:32:16.333527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.299  [2024-12-09 16:32:16.333579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:21:47.299  [2024-12-09 16:32:16.333600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.585 ms
00:21:47.299  [2024-12-09 16:32:16.333613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.300  [2024-12-09 16:32:16.370346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.300  [2024-12-09 16:32:16.370395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:21:47.300  [2024-12-09 16:32:16.370415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.734 ms
00:21:47.300  [2024-12-09 16:32:16.370443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.300  [2024-12-09 16:32:16.407332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.300  [2024-12-09 16:32:16.407379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:21:47.300  [2024-12-09 16:32:16.407400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.892 ms
00:21:47.300  [2024-12-09 16:32:16.407414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.300  [2024-12-09 16:32:16.407484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.300  [2024-12-09 16:32:16.407498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:21:47.300  [2024-12-09 16:32:16.407522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:21:47.300  [2024-12-09 16:32:16.407534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.300  [2024-12-09 16:32:16.407682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:47.300  [2024-12-09 16:32:16.407697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:21:47.300  [2024-12-09 16:32:16.407714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.033 ms
00:21:47.300  [2024-12-09 16:32:16.407726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:47.300  [2024-12-09 16:32:16.409118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5259.876 ms, result 0
00:21:47.300  {
00:21:47.300    "name": "ftl0",
00:21:47.300    "uuid": "ac49845c-d52d-40e4-ac02-795f51843c8b"
00:21:47.300  }
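Note: the whole first-time 'FTL startup' took 5259.876 ms, dominated by the one-time scrub of the 5 fresh NV cache chunks (4642.844 ms), which is why fio.sh issues the create with a generous client-side timeout (the call traced at line 60, repeated here for reference):

    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fdff9d77-4032-4160-aee5-71fa17f2f96d \
        -c nvc0n1p0 --l2p_dram_limit 60

The JSON that comes back carries the superblock UUID minted during startup (ac49845c-...).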
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:47.300   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:21:47.558   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:21:47.817  [
00:21:47.817    {
00:21:47.817      "name": "ftl0",
00:21:47.817      "aliases": [
00:21:47.817        "ac49845c-d52d-40e4-ac02-795f51843c8b"
00:21:47.817      ],
00:21:47.817      "product_name": "FTL disk",
00:21:47.817      "block_size": 4096,
00:21:47.817      "num_blocks": 20971520,
00:21:47.817      "uuid": "ac49845c-d52d-40e4-ac02-795f51843c8b",
00:21:47.817      "assigned_rate_limits": {
00:21:47.817        "rw_ios_per_sec": 0,
00:21:47.817        "rw_mbytes_per_sec": 0,
00:21:47.817        "r_mbytes_per_sec": 0,
00:21:47.817        "w_mbytes_per_sec": 0
00:21:47.817      },
00:21:47.817      "claimed": false,
00:21:47.817      "zoned": false,
00:21:47.817      "supported_io_types": {
00:21:47.817        "read": true,
00:21:47.817        "write": true,
00:21:47.817        "unmap": true,
00:21:47.817        "flush": true,
00:21:47.817        "reset": false,
00:21:47.817        "nvme_admin": false,
00:21:47.817        "nvme_io": false,
00:21:47.817        "nvme_io_md": false,
00:21:47.817        "write_zeroes": true,
00:21:47.817        "zcopy": false,
00:21:47.817        "get_zone_info": false,
00:21:47.817        "zone_management": false,
00:21:47.817        "zone_append": false,
00:21:47.817        "compare": false,
00:21:47.817        "compare_and_write": false,
00:21:47.817        "abort": false,
00:21:47.817        "seek_hole": false,
00:21:47.817        "seek_data": false,
00:21:47.817        "copy": false,
00:21:47.817        "nvme_iov_md": false
00:21:47.817      },
00:21:47.817      "driver_specific": {
00:21:47.817        "ftl": {
00:21:47.817          "base_bdev": "fdff9d77-4032-4160-aee5-71fa17f2f96d",
00:21:47.817          "cache": "nvc0n1p0"
00:21:47.817        }
00:21:47.817      }
00:21:47.817    }
00:21:47.817  ]
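Note: waitforbdev from autotest_common.sh reduces to the two RPCs traced above: flush any pending examine callbacks, then fetch the bdev with a wait timeout (-t 2000 is the 2000 ms default, picked up because no explicit timeout was passed):

    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000

A non-empty JSON array, as above, means the bdev is registered, and return 0 follows.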
00:21:47.817   16:32:16 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0
00:21:47.817   16:32:16 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:21:47.817   16:32:16 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:48.076   16:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
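Note: lines 68-70 of fio.sh sandwich the saved bdev subsystem config between the two echoed fragments, producing a complete {"subsystems": [...]} JSON document for the fio run later in the test; roughly (the output file name is assumed):

    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > ftl.json   # ftl.json is a placeholder name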
00:21:48.076   16:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:21:48.337  [2024-12-09 16:32:17.257892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.257966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:21:48.337  [2024-12-09 16:32:17.257984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:21:48.337  [2024-12-09 16:32:17.258016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.258080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:48.337  [2024-12-09 16:32:17.262433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.262476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:21:48.337  [2024-12-09 16:32:17.262495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.323 ms
00:21:48.337  [2024-12-09 16:32:17.262508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.263357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.263387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:21:48.337  [2024-12-09 16:32:17.263405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.785 ms
00:21:48.337  [2024-12-09 16:32:17.263418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.265954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.265985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:21:48.337  [2024-12-09 16:32:17.266020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.487 ms
00:21:48.337  [2024-12-09 16:32:17.266033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.271029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.271072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:21:48.337  [2024-12-09 16:32:17.271106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.938 ms
00:21:48.337  [2024-12-09 16:32:17.271118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.306760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.306811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:21:48.337  [2024-12-09 16:32:17.306868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.553 ms
00:21:48.337  [2024-12-09 16:32:17.306880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.332048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.332107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:21:48.337  [2024-12-09 16:32:17.332134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.121 ms
00:21:48.337  [2024-12-09 16:32:17.332146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.332411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.332444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:21:48.337  [2024-12-09 16:32:17.332461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.192 ms
00:21:48.337  [2024-12-09 16:32:17.332474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.367555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.367600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:21:48.337  [2024-12-09 16:32:17.367636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.085 ms
00:21:48.337  [2024-12-09 16:32:17.367647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.402661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.402702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:21:48.337  [2024-12-09 16:32:17.402722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.991 ms
00:21:48.337  [2024-12-09 16:32:17.402735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.439000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.439058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:21:48.337  [2024-12-09 16:32:17.439079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.239 ms
00:21:48.337  [2024-12-09 16:32:17.439091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.474511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.337  [2024-12-09 16:32:17.474559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:21:48.337  [2024-12-09 16:32:17.474579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.273 ms
00:21:48.337  [2024-12-09 16:32:17.474591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.337  [2024-12-09 16:32:17.474664] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:48.337  [2024-12-09 16:32:17.474683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.474998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.337  [2024-12-09 16:32:17.475416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.475991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:21:48.338  [2024-12-09 16:32:17.476217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:21:48.338  [2024-12-09 16:32:17.476233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         ac49845c-d52d-40e4-ac02-795f51843c8b
00:21:48.338  [2024-12-09 16:32:17.476246] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:21:48.338  [2024-12-09 16:32:17.476263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:21:48.338  [2024-12-09 16:32:17.476275] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:21:48.338  [2024-12-09 16:32:17.476294] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:21:48.338  [2024-12-09 16:32:17.476307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:48.338  [2024-12-09 16:32:17.476322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:21:48.338  [2024-12-09 16:32:17.476335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:21:48.338  [2024-12-09 16:32:17.476349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:21:48.338  [2024-12-09 16:32:17.476359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:21:48.338  [2024-12-09 16:32:17.476375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.338  [2024-12-09 16:32:17.476387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:21:48.338  [2024-12-09 16:32:17.476405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.716 ms
00:21:48.338  [2024-12-09 16:32:17.476418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.338  [2024-12-09 16:32:17.496743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.338  [2024-12-09 16:32:17.496791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:21:48.338  [2024-12-09 16:32:17.496826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.273 ms
00:21:48.338  [2024-12-09 16:32:17.496839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.338  [2024-12-09 16:32:17.497448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:48.338  [2024-12-09 16:32:17.497475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:21:48.338  [2024-12-09 16:32:17.497493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.550 ms
00:21:48.338  [2024-12-09 16:32:17.497505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.597  [2024-12-09 16:32:17.566997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.597  [2024-12-09 16:32:17.567042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:48.597  [2024-12-09 16:32:17.567078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.597  [2024-12-09 16:32:17.567091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.597  [2024-12-09 16:32:17.567181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.597  [2024-12-09 16:32:17.567195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:21:48.597  [2024-12-09 16:32:17.567211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.597  [2024-12-09 16:32:17.567223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.597  [2024-12-09 16:32:17.567357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.597  [2024-12-09 16:32:17.567377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:21:48.597  [2024-12-09 16:32:17.567394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.597  [2024-12-09 16:32:17.567406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.597  [2024-12-09 16:32:17.567456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.597  [2024-12-09 16:32:17.567469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:21:48.597  [2024-12-09 16:32:17.567484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.597  [2024-12-09 16:32:17.567496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.597  [2024-12-09 16:32:17.695395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.597  [2024-12-09 16:32:17.695480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:21:48.597  [2024-12-09 16:32:17.695501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.597  [2024-12-09 16:32:17.695514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.791131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.791213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:21:48.857  [2024-12-09 16:32:17.791234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.791247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.791406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.791420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:21:48.857  [2024-12-09 16:32:17.791441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.791453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.791573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.791587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:21:48.857  [2024-12-09 16:32:17.791603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.791615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.791778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.791794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:21:48.857  [2024-12-09 16:32:17.791811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.791826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.791934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.791954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:21:48.857  [2024-12-09 16:32:17.791972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.791984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.792050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.792063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:21:48.857  [2024-12-09 16:32:17.792079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.792095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.792177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:48.857  [2024-12-09 16:32:17.792191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:21:48.857  [2024-12-09 16:32:17.792207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:48.857  [2024-12-09 16:32:17.792218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:48.857  [2024-12-09 16:32:17.792464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.410 ms, result 0
00:21:48.857  true
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77923
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77923 ']'
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77923
00:21:48.857    16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:48.857    16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77923
00:21:48.857  killing process with pid 77923
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77923'
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77923
00:21:48.857   16:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77923
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:21:54.126    16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:54.126    16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:21:54.126    16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:21:54.126   16:32:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:21:54.126  test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:21:54.126  fio-3.35
00:21:54.126  Starting 1 thread
00:22:00.691  
00:22:00.691  test: (groupid=0, jobs=1): err= 0: pid=78150: Mon Dec  9 16:32:28 2024
00:22:00.691    read: IOPS=878, BW=58.4MiB/s (61.2MB/s)(255MiB/4362msec)
00:22:00.691      slat (usec): min=4, max=119, avg=10.65, stdev= 4.61
00:22:00.691      clat (usec): min=335, max=1046, avg=515.81, stdev=50.80
00:22:00.691       lat (usec): min=348, max=1051, avg=526.46, stdev=51.75
00:22:00.691      clat percentiles (usec):
00:22:00.691       |  1.00th=[  408],  5.00th=[  441], 10.00th=[  469], 20.00th=[  482],
00:22:00.691       | 30.00th=[  490], 40.00th=[  494], 50.00th=[  502], 60.00th=[  523],
00:22:00.691       | 70.00th=[  545], 80.00th=[  562], 90.00th=[  570], 95.00th=[  586],
00:22:00.691       | 99.00th=[  660], 99.50th=[  750], 99.90th=[  881], 99.95th=[  914],
00:22:00.691       | 99.99th=[ 1045]
00:22:00.691    write: IOPS=884, BW=58.8MiB/s (61.6MB/s)(256MiB/4358msec); 0 zone resets
00:22:00.691      slat (usec): min=15, max=137, avg=23.17, stdev= 5.92
00:22:00.691      clat (usec): min=419, max=1097, avg=573.83, stdev=65.69
00:22:00.691       lat (usec): min=437, max=1115, avg=597.00, stdev=66.04
00:22:00.691      clat percentiles (usec):
00:22:00.691       |  1.00th=[  457],  5.00th=[  494], 10.00th=[  506], 20.00th=[  523],
00:22:00.691       | 30.00th=[  553], 40.00th=[  570], 50.00th=[  578], 60.00th=[  586],
00:22:00.691       | 70.00th=[  586], 80.00th=[  594], 90.00th=[  619], 95.00th=[  652],
00:22:00.691       | 99.00th=[  881], 99.50th=[  947], 99.90th=[ 1029], 99.95th=[ 1090],
00:22:00.691       | 99.99th=[ 1106]
00:22:00.691     bw (  KiB/s): min=59024, max=63376, per=99.99%, avg=60163.00, stdev=1491.47, samples=8
00:22:00.691     iops        : min=  868, max=  932, avg=884.75, stdev=21.93, samples=8
00:22:00.691    lat (usec)   : 500=28.26%, 750=70.35%, 1000=1.27%
00:22:00.691    lat (msec)   : 2=0.12%
00:22:00.691    cpu          : usr=98.72%, sys=0.25%, ctx=8, majf=0, minf=1169
00:22:00.691    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:22:00.691       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:00.691       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:00.691       issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:00.691       latency   : target=0, window=0, percentile=100.00%, depth=1
00:22:00.691  
00:22:00.691  Run status group 0 (all jobs):
00:22:00.691     READ: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=255MiB (267MB), run=4362-4362msec
00:22:00.691    WRITE: bw=58.8MiB/s (61.6MB/s), 58.8MiB/s-58.8MiB/s (61.6MB/s-61.6MB/s), io=256MiB (269MB), run=4358-4358msec
00:22:01.628  -----------------------------------------------------
00:22:01.628  Suppressions used:
00:22:01.628    count      bytes template
00:22:01.628        1          5 /usr/src/fio/parse.c
00:22:01.628        1          8 libtcmalloc_minimal.so
00:22:01.628        1        904 libcrypto.so
00:22:01.628  -----------------------------------------------------
00:22:01.628  
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:01.628   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:01.887    16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:01.887    16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:22:01.887    16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:01.887   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:22:01.887   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:22:01.887   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:22:01.887   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:22:01.887   16:32:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:22:01.887  first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:22:01.887  second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:22:01.887  fio-3.35
00:22:01.887  Starting 2 threads
00:22:33.963  
00:22:33.963  first_half: (groupid=0, jobs=1): err= 0: pid=78264: Mon Dec  9 16:32:59 2024
00:22:33.963    read: IOPS=2413, BW=9655KiB/s (9886kB/s)(255MiB/27027msec)
00:22:33.963      slat (nsec): min=3448, max=54898, avg=9880.93, stdev=5129.36
00:22:33.963      clat (usec): min=693, max=372500, avg=40480.26, stdev=22041.07
00:22:33.963       lat (usec): min=709, max=372514, avg=40490.14, stdev=22041.56
00:22:33.963      clat percentiles (msec):
00:22:33.963       |  1.00th=[    8],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:22:33.963       | 30.00th=[   36], 40.00th=[   37], 50.00th=[   37], 60.00th=[   37],
00:22:33.963       | 70.00th=[   38], 80.00th=[   40], 90.00th=[   46], 95.00th=[   59],
00:22:33.963       | 99.00th=[  167], 99.50th=[  199], 99.90th=[  226], 99.95th=[  321],
00:22:33.963       | 99.99th=[  363]
00:22:33.963    write: IOPS=2898, BW=11.3MiB/s (11.9MB/s)(256MiB/22614msec); 0 zone resets
00:22:33.963      slat (usec): min=4, max=3354, avg=11.12, stdev=26.21
00:22:33.963      clat (usec): min=432, max=124694, avg=12430.33, stdev=20286.93
00:22:33.963       lat (usec): min=445, max=124703, avg=12441.45, stdev=20287.28
00:22:33.963      clat percentiles (usec):
00:22:33.963       |  1.00th=[   881],  5.00th=[  1156], 10.00th=[  1352], 20.00th=[  1680],
00:22:33.963       | 30.00th=[  2343], 40.00th=[  4490], 50.00th=[  5997], 60.00th=[  7308],
00:22:33.963       | 70.00th=[  9372], 80.00th=[ 13829], 90.00th=[ 26346], 95.00th=[ 76022],
00:22:33.963       | 99.00th=[ 89654], 99.50th=[104334], 99.90th=[114820], 99.95th=[117965],
00:22:33.963       | 99.99th=[123208]
00:22:33.963     bw (  KiB/s): min=  872, max=42040, per=94.25%, avg=21850.25, stdev=12013.26, samples=24
00:22:33.963     iops        : min=  218, max=10510, avg=5462.50, stdev=3003.31, samples=24
00:22:33.963    lat (usec)   : 500=0.01%, 750=0.12%, 1000=1.02%
00:22:33.963    lat (msec)   : 2=12.30%, 4=5.55%, 10=17.34%, 20=8.86%, 50=48.07%
00:22:33.963    lat (msec)   : 100=5.17%, 250=1.52%, 500=0.04%
00:22:33.963    cpu          : usr=99.13%, sys=0.30%, ctx=42, majf=0, minf=5605
00:22:33.963    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:22:33.963       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:33.963       complete  : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:33.963       issued rwts: total=65234,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:33.963       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:33.963  second_half: (groupid=0, jobs=1): err= 0: pid=78265: Mon Dec  9 16:32:59 2024
00:22:33.963    read: IOPS=2397, BW=9589KiB/s (9819kB/s)(255MiB/27207msec)
00:22:33.963      slat (usec): min=3, max=113, avg= 9.09, stdev= 4.05
00:22:33.963      clat (usec): min=846, max=382705, avg=40073.47, stdev=23270.97
00:22:33.963       lat (usec): min=856, max=382715, avg=40082.56, stdev=23271.44
00:22:33.963      clat percentiles (msec):
00:22:33.963       |  1.00th=[   13],  5.00th=[   32], 10.00th=[   33], 20.00th=[   34],
00:22:33.963       | 30.00th=[   36], 40.00th=[   37], 50.00th=[   37], 60.00th=[   37],
00:22:33.964       | 70.00th=[   38], 80.00th=[   39], 90.00th=[   45], 95.00th=[   51],
00:22:33.964       | 99.00th=[  163], 99.50th=[  192], 99.90th=[  305], 99.95th=[  351],
00:22:33.964       | 99.99th=[  376]
00:22:33.964    write: IOPS=3277, BW=12.8MiB/s (13.4MB/s)(256MiB/19995msec); 0 zone resets
00:22:33.964      slat (usec): min=4, max=1048, avg=10.24, stdev= 7.31
00:22:33.964      clat (usec): min=473, max=125684, avg=13213.50, stdev=20895.98
00:22:33.964       lat (usec): min=484, max=125695, avg=13223.74, stdev=20896.31
00:22:33.964      clat percentiles (usec):
00:22:33.964       |  1.00th=[   881],  5.00th=[  1123], 10.00th=[  1336], 20.00th=[  1729],
00:22:33.964       | 30.00th=[  2507], 40.00th=[  4424], 50.00th=[  6128], 60.00th=[  8094],
00:22:33.964       | 70.00th=[ 10421], 80.00th=[ 14091], 90.00th=[ 36963], 95.00th=[ 76022],
00:22:33.964       | 99.00th=[ 91751], 99.50th=[103285], 99.90th=[119014], 99.95th=[123208],
00:22:33.964       | 99.99th=[124257]
00:22:33.964     bw (  KiB/s): min=  160, max=46680, per=90.52%, avg=20986.72, stdev=13520.06, samples=25
00:22:33.964     iops        : min=   40, max=11670, avg=5246.56, stdev=3379.95, samples=25
00:22:33.964    lat (usec)   : 500=0.01%, 750=0.12%, 1000=1.12%
00:22:33.964    lat (msec)   : 2=11.25%, 4=6.80%, 10=15.66%, 20=10.20%, 50=48.71%
00:22:33.964    lat (msec)   : 100=4.43%, 250=1.63%, 500=0.09%
00:22:33.964    cpu          : usr=99.14%, sys=0.24%, ctx=45, majf=0, minf=5512
00:22:33.964    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:22:33.964       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:33.964       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:33.964       issued rwts: total=65223,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:33.964       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:33.964  
00:22:33.964  Run status group 0 (all jobs):
00:22:33.964     READ: bw=18.7MiB/s (19.6MB/s), 9589KiB/s-9655KiB/s (9819kB/s-9886kB/s), io=510MiB (534MB), run=27027-27207msec
00:22:33.964    WRITE: bw=22.6MiB/s (23.7MB/s), 11.3MiB/s-12.8MiB/s (11.9MB/s-13.4MB/s), io=512MiB (537MB), run=19995-22614msec
00:22:33.964  -----------------------------------------------------
00:22:33.964  Suppressions used:
00:22:33.964    count      bytes template
00:22:33.964        2         10 /usr/src/fio/parse.c
00:22:33.964        3        288 /usr/src/fio/iolog.c
00:22:33.964        1          8 libtcmalloc_minimal.so
00:22:33.964        1        904 libcrypto.so
00:22:33.964  -----------------------------------------------------
00:22:33.964  
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:33.964    16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:33.964    16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:22:33.964    16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:22:33.964   16:33:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:33.964  test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:22:33.964  fio-3.35
00:22:33.964  Starting 1 thread
00:22:52.117  
00:22:52.117  test: (groupid=0, jobs=1): err= 0: pid=78610: Mon Dec  9 16:33:20 2024
00:22:52.117    read: IOPS=6244, BW=24.4MiB/s (25.6MB/s)(255MiB/10441msec)
00:22:52.117      slat (nsec): min=3393, max=51104, avg=7977.31, stdev=3552.69
00:22:52.117      clat (usec): min=744, max=39294, avg=20484.35, stdev=1186.53
00:22:52.117       lat (usec): min=756, max=39307, avg=20492.33, stdev=1186.42
00:22:52.117      clat percentiles (usec):
00:22:52.117       |  1.00th=[19268],  5.00th=[19530], 10.00th=[19792], 20.00th=[20055],
00:22:52.117       | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579],
00:22:52.117       | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[21627],
00:22:52.117       | 99.00th=[23987], 99.50th=[28181], 99.90th=[32637], 99.95th=[34341],
00:22:52.117       | 99.99th=[38536]
00:22:52.117    write: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(256MiB/5876msec); 0 zone resets
00:22:52.117      slat (usec): min=4, max=747, avg= 8.84, stdev= 8.03
00:22:52.117      clat (usec): min=593, max=70911, avg=11415.37, stdev=15573.02
00:22:52.117       lat (usec): min=601, max=70917, avg=11424.21, stdev=15573.27
00:22:52.117      clat percentiles (usec):
00:22:52.117       |  1.00th=[ 1090],  5.00th=[ 1336], 10.00th=[ 1516], 20.00th=[ 1811],
00:22:52.117       | 30.00th=[ 2114], 40.00th=[ 2966], 50.00th=[ 6390], 60.00th=[ 7701],
00:22:52.117       | 70.00th=[ 8586], 80.00th=[10421], 90.00th=[40109], 95.00th=[48497],
00:22:52.117       | 99.00th=[62129], 99.50th=[64750], 99.90th=[67634], 99.95th=[68682],
00:22:52.117       | 99.99th=[69731]
00:22:52.117     bw (  KiB/s): min=27120, max=68168, per=97.93%, avg=43690.67, stdev=14312.02, samples=12
00:22:52.117     iops        : min= 6780, max=17042, avg=10922.67, stdev=3578.01, samples=12
00:22:52.117    lat (usec)   : 750=0.01%, 1000=0.23%
00:22:52.117    lat (msec)   : 2=13.10%, 4=7.65%, 10=18.19%, 20=14.98%, 50=43.57%
00:22:52.117    lat (msec)   : 100=2.27%
00:22:52.117    cpu          : usr=98.83%, sys=0.39%, ctx=27, majf=0, minf=5565
00:22:52.117    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:22:52.117       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:52.117       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:52.117       issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:52.117       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:52.117  
00:22:52.117  Run status group 0 (all jobs):
00:22:52.117     READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=255MiB (267MB), run=10441-10441msec
00:22:52.117    WRITE: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=256MiB (268MB), run=5876-5876msec
00:22:53.056  -----------------------------------------------------
00:22:53.056  Suppressions used:
00:22:53.056    count      bytes template
00:22:53.056        1          5 /usr/src/fio/parse.c
00:22:53.056        2        192 /usr/src/fio/iolog.c
00:22:53.056        1          8 libtcmalloc_minimal.so
00:22:53.056        1        904 libcrypto.so
00:22:53.056  -----------------------------------------------------
00:22:53.056  
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:53.056  Remove shared memory files
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58937 /dev/shm/spdk_tgt_trace.pid76814
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:22:53.056  ************************************
00:22:53.056  END TEST ftl_fio_basic
00:22:53.056  ************************************
00:22:53.056  
00:22:53.056  real	1m15.195s
00:22:53.056  user	2m44.232s
00:22:53.056  sys	0m4.165s
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:53.056   16:33:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:53.315   16:33:22 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:22:53.315   16:33:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:53.315   16:33:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:53.315   16:33:22 ftl -- common/autotest_common.sh@10 -- # set +x
00:22:53.315  ************************************
00:22:53.315  START TEST ftl_bdevperf
00:22:53.315  ************************************
00:22:53.315   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:22:53.315  * Looking for test storage...
00:22:53.315  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:53.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:53.315  		--rc genhtml_branch_coverage=1
00:22:53.315  		--rc genhtml_function_coverage=1
00:22:53.315  		--rc genhtml_legend=1
00:22:53.315  		--rc geninfo_all_blocks=1
00:22:53.315  		--rc geninfo_unexecuted_blocks=1
00:22:53.315  		
00:22:53.315  		'
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:53.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:53.315  		--rc genhtml_branch_coverage=1
00:22:53.315  		--rc genhtml_function_coverage=1
00:22:53.315  		--rc genhtml_legend=1
00:22:53.315  		--rc geninfo_all_blocks=1
00:22:53.315  		--rc geninfo_unexecuted_blocks=1
00:22:53.315  		
00:22:53.315  		'
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:53.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:53.315  		--rc genhtml_branch_coverage=1
00:22:53.315  		--rc genhtml_function_coverage=1
00:22:53.315  		--rc genhtml_legend=1
00:22:53.315  		--rc geninfo_all_blocks=1
00:22:53.315  		--rc geninfo_unexecuted_blocks=1
00:22:53.315  		
00:22:53.315  		'
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:53.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:53.315  		--rc genhtml_branch_coverage=1
00:22:53.315  		--rc genhtml_function_coverage=1
00:22:53.315  		--rc genhtml_legend=1
00:22:53.315  		--rc geninfo_all_blocks=1
00:22:53.315  		--rc geninfo_unexecuted_blocks=1
00:22:53.315  		
00:22:53.315  		'
00:22:53.315   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:22:53.315      16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:22:53.315     16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:22:53.315    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:22:53.575     16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:53.575    16:33:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78878
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78878
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78878 ']'
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:53.575  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:53.575   16:33:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:53.575  [2024-12-09 16:33:22.599082] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:22:53.575  [2024-12-09 16:33:22.599388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78878 ]
00:22:53.833  [2024-12-09 16:33:22.778966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:53.833  [2024-12-09 16:33:22.881088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:54.401   16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:54.401   16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:22:54.401    16:33:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:22:54.401    16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0
00:22:54.401    16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:22:54.401    16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424
00:22:54.401    16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev
00:22:54.401     16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:22:54.659    16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:22:54.659    16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size
00:22:54.659     16:33:23 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:22:54.659     16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:22:54.659     16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:54.659     16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:54.659     16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:54.659      16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:22:54.919     16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:54.919    {
00:22:54.919      "name": "nvme0n1",
00:22:54.919      "aliases": [
00:22:54.919        "319a6090-c549-4615-b9a8-317d51e343e8"
00:22:54.919      ],
00:22:54.919      "product_name": "NVMe disk",
00:22:54.919      "block_size": 4096,
00:22:54.919      "num_blocks": 1310720,
00:22:54.919      "uuid": "319a6090-c549-4615-b9a8-317d51e343e8",
00:22:54.919      "numa_id": -1,
00:22:54.919      "assigned_rate_limits": {
00:22:54.919        "rw_ios_per_sec": 0,
00:22:54.919        "rw_mbytes_per_sec": 0,
00:22:54.919        "r_mbytes_per_sec": 0,
00:22:54.919        "w_mbytes_per_sec": 0
00:22:54.919      },
00:22:54.919      "claimed": true,
00:22:54.919      "claim_type": "read_many_write_one",
00:22:54.919      "zoned": false,
00:22:54.919      "supported_io_types": {
00:22:54.919        "read": true,
00:22:54.919        "write": true,
00:22:54.919        "unmap": true,
00:22:54.919        "flush": true,
00:22:54.919        "reset": true,
00:22:54.919        "nvme_admin": true,
00:22:54.919        "nvme_io": true,
00:22:54.919        "nvme_io_md": false,
00:22:54.919        "write_zeroes": true,
00:22:54.919        "zcopy": false,
00:22:54.919        "get_zone_info": false,
00:22:54.919        "zone_management": false,
00:22:54.919        "zone_append": false,
00:22:54.919        "compare": true,
00:22:54.919        "compare_and_write": false,
00:22:54.919        "abort": true,
00:22:54.919        "seek_hole": false,
00:22:54.919        "seek_data": false,
00:22:54.919        "copy": true,
00:22:54.919        "nvme_iov_md": false
00:22:54.919      },
00:22:54.919      "driver_specific": {
00:22:54.919        "nvme": [
00:22:54.919          {
00:22:54.919            "pci_address": "0000:00:11.0",
00:22:54.919            "trid": {
00:22:54.919              "trtype": "PCIe",
00:22:54.919              "traddr": "0000:00:11.0"
00:22:54.919            },
00:22:54.919            "ctrlr_data": {
00:22:54.919              "cntlid": 0,
00:22:54.919              "vendor_id": "0x1b36",
00:22:54.919              "model_number": "QEMU NVMe Ctrl",
00:22:54.919              "serial_number": "12341",
00:22:54.919              "firmware_revision": "8.0.0",
00:22:54.919              "subnqn": "nqn.2019-08.org.qemu:12341",
00:22:54.919              "oacs": {
00:22:54.919                "security": 0,
00:22:54.919                "format": 1,
00:22:54.919                "firmware": 0,
00:22:54.919                "ns_manage": 1
00:22:54.919              },
00:22:54.919              "multi_ctrlr": false,
00:22:54.919              "ana_reporting": false
00:22:54.919            },
00:22:54.919            "vs": {
00:22:54.919              "nvme_version": "1.4"
00:22:54.919            },
00:22:54.919            "ns_data": {
00:22:54.919              "id": 1,
00:22:54.919              "can_share": false
00:22:54.919            }
00:22:54.919          }
00:22:54.919        ],
00:22:54.919        "mp_policy": "active_passive"
00:22:54.919      }
00:22:54.919    }
00:22:54.919  ]'
00:22:54.919      16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:54.919     16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:54.919      16:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:54.919     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720
00:22:54.919     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:22:54.919     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120
00:22:54.919    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120
00:22:54.919    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:22:54.919    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols
00:22:54.919     16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:22:54.919     16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:22:55.178    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=b58bf2b2-5374-43e5-868f-b9ab272a8d83
00:22:55.178    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores
00:22:55.178    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b58bf2b2-5374-43e5-868f-b9ab272a8d83
00:22:55.436     16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:22:55.694    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=34a50bd0-c10d-447e-b805-73db68c4f6cc
00:22:55.694    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 34a50bd0-c10d-447e-b805-73db68c4f6cc
00:22:55.694   16:33:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8b9309af-5db0-40a3-8525-861e2bf48636
00:22:55.694    16:33:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:55.694    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0
00:22:55.694    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:22:55.695    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8b9309af-5db0-40a3-8525-861e2bf48636
00:22:55.695    16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size=
00:22:55.695     16:33:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:55.695     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8b9309af-5db0-40a3-8525-861e2bf48636
00:22:55.695     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:55.695     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:55.695     16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:55.695      16:33:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:55.953     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:55.953    {
00:22:55.953      "name": "8b9309af-5db0-40a3-8525-861e2bf48636",
00:22:55.953      "aliases": [
00:22:55.953        "lvs/nvme0n1p0"
00:22:55.953      ],
00:22:55.953      "product_name": "Logical Volume",
00:22:55.953      "block_size": 4096,
00:22:55.953      "num_blocks": 26476544,
00:22:55.953      "uuid": "8b9309af-5db0-40a3-8525-861e2bf48636",
00:22:55.953      "assigned_rate_limits": {
00:22:55.953        "rw_ios_per_sec": 0,
00:22:55.953        "rw_mbytes_per_sec": 0,
00:22:55.953        "r_mbytes_per_sec": 0,
00:22:55.953        "w_mbytes_per_sec": 0
00:22:55.953      },
00:22:55.953      "claimed": false,
00:22:55.953      "zoned": false,
00:22:55.953      "supported_io_types": {
00:22:55.953        "read": true,
00:22:55.953        "write": true,
00:22:55.953        "unmap": true,
00:22:55.953        "flush": false,
00:22:55.953        "reset": true,
00:22:55.953        "nvme_admin": false,
00:22:55.953        "nvme_io": false,
00:22:55.953        "nvme_io_md": false,
00:22:55.953        "write_zeroes": true,
00:22:55.953        "zcopy": false,
00:22:55.953        "get_zone_info": false,
00:22:55.953        "zone_management": false,
00:22:55.953        "zone_append": false,
00:22:55.953        "compare": false,
00:22:55.953        "compare_and_write": false,
00:22:55.953        "abort": false,
00:22:55.953        "seek_hole": true,
00:22:55.953        "seek_data": true,
00:22:55.953        "copy": false,
00:22:55.953        "nvme_iov_md": false
00:22:55.953      },
00:22:55.953      "driver_specific": {
00:22:55.953        "lvol": {
00:22:55.953          "lvol_store_uuid": "34a50bd0-c10d-447e-b805-73db68c4f6cc",
00:22:55.953          "base_bdev": "nvme0n1",
00:22:55.953          "thin_provision": true,
00:22:55.953          "num_allocated_clusters": 0,
00:22:55.953          "snapshot": false,
00:22:55.953          "clone": false,
00:22:55.953          "esnap_clone": false
00:22:55.953        }
00:22:55.953      }
00:22:55.953    }
00:22:55.953  ]'
00:22:55.953      16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:55.953     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:55.953      16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:55.953     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:22:55.953     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:55.953     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:22:55.953    16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171
00:22:55.953    16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev
00:22:55.953     16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:22:56.520    16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:22:56.520    16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]]
00:22:56.520     16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:56.520     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8b9309af-5db0-40a3-8525-861e2bf48636
00:22:56.520     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:56.520     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:56.520     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:56.520      16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:56.520     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:56.520    {
00:22:56.520      "name": "8b9309af-5db0-40a3-8525-861e2bf48636",
00:22:56.520      "aliases": [
00:22:56.520        "lvs/nvme0n1p0"
00:22:56.520      ],
00:22:56.520      "product_name": "Logical Volume",
00:22:56.520      "block_size": 4096,
00:22:56.520      "num_blocks": 26476544,
00:22:56.520      "uuid": "8b9309af-5db0-40a3-8525-861e2bf48636",
00:22:56.520      "assigned_rate_limits": {
00:22:56.520        "rw_ios_per_sec": 0,
00:22:56.520        "rw_mbytes_per_sec": 0,
00:22:56.520        "r_mbytes_per_sec": 0,
00:22:56.520        "w_mbytes_per_sec": 0
00:22:56.520      },
00:22:56.520      "claimed": false,
00:22:56.520      "zoned": false,
00:22:56.520      "supported_io_types": {
00:22:56.520        "read": true,
00:22:56.520        "write": true,
00:22:56.520        "unmap": true,
00:22:56.520        "flush": false,
00:22:56.520        "reset": true,
00:22:56.520        "nvme_admin": false,
00:22:56.520        "nvme_io": false,
00:22:56.520        "nvme_io_md": false,
00:22:56.520        "write_zeroes": true,
00:22:56.520        "zcopy": false,
00:22:56.520        "get_zone_info": false,
00:22:56.520        "zone_management": false,
00:22:56.520        "zone_append": false,
00:22:56.520        "compare": false,
00:22:56.520        "compare_and_write": false,
00:22:56.520        "abort": false,
00:22:56.520        "seek_hole": true,
00:22:56.520        "seek_data": true,
00:22:56.520        "copy": false,
00:22:56.520        "nvme_iov_md": false
00:22:56.520      },
00:22:56.520      "driver_specific": {
00:22:56.520        "lvol": {
00:22:56.520          "lvol_store_uuid": "34a50bd0-c10d-447e-b805-73db68c4f6cc",
00:22:56.520          "base_bdev": "nvme0n1",
00:22:56.520          "thin_provision": true,
00:22:56.520          "num_allocated_clusters": 0,
00:22:56.520          "snapshot": false,
00:22:56.520          "clone": false,
00:22:56.520          "esnap_clone": false
00:22:56.520        }
00:22:56.520      }
00:22:56.520    }
00:22:56.520  ]'
00:22:56.520      16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:56.521     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:56.521      16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:56.521     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:22:56.521     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:56.521     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:22:56.521    16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:22:56.521    16:33:25 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:22:56.780   16:33:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
00:22:56.780    16:33:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:56.780    16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8b9309af-5db0-40a3-8525-861e2bf48636
00:22:56.780    16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:56.780    16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:56.780    16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:56.780     16:33:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b9309af-5db0-40a3-8525-861e2bf48636
00:22:57.039    16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:57.039    {
00:22:57.039      "name": "8b9309af-5db0-40a3-8525-861e2bf48636",
00:22:57.039      "aliases": [
00:22:57.039        "lvs/nvme0n1p0"
00:22:57.039      ],
00:22:57.039      "product_name": "Logical Volume",
00:22:57.039      "block_size": 4096,
00:22:57.039      "num_blocks": 26476544,
00:22:57.039      "uuid": "8b9309af-5db0-40a3-8525-861e2bf48636",
00:22:57.039      "assigned_rate_limits": {
00:22:57.039        "rw_ios_per_sec": 0,
00:22:57.039        "rw_mbytes_per_sec": 0,
00:22:57.039        "r_mbytes_per_sec": 0,
00:22:57.039        "w_mbytes_per_sec": 0
00:22:57.039      },
00:22:57.039      "claimed": false,
00:22:57.039      "zoned": false,
00:22:57.039      "supported_io_types": {
00:22:57.039        "read": true,
00:22:57.039        "write": true,
00:22:57.039        "unmap": true,
00:22:57.039        "flush": false,
00:22:57.039        "reset": true,
00:22:57.039        "nvme_admin": false,
00:22:57.039        "nvme_io": false,
00:22:57.039        "nvme_io_md": false,
00:22:57.039        "write_zeroes": true,
00:22:57.039        "zcopy": false,
00:22:57.039        "get_zone_info": false,
00:22:57.039        "zone_management": false,
00:22:57.039        "zone_append": false,
00:22:57.039        "compare": false,
00:22:57.039        "compare_and_write": false,
00:22:57.039        "abort": false,
00:22:57.039        "seek_hole": true,
00:22:57.039        "seek_data": true,
00:22:57.039        "copy": false,
00:22:57.039        "nvme_iov_md": false
00:22:57.039      },
00:22:57.039      "driver_specific": {
00:22:57.039        "lvol": {
00:22:57.039          "lvol_store_uuid": "34a50bd0-c10d-447e-b805-73db68c4f6cc",
00:22:57.039          "base_bdev": "nvme0n1",
00:22:57.039          "thin_provision": true,
00:22:57.039          "num_allocated_clusters": 0,
00:22:57.039          "snapshot": false,
00:22:57.039          "clone": false,
00:22:57.039          "esnap_clone": false
00:22:57.039        }
00:22:57.039      }
00:22:57.039    }
00:22:57.039  ]'
00:22:57.039     16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:57.039    16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:57.039     16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:57.039    16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:22:57.039    16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:57.039    16:33:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:22:57.039   16:33:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20
00:22:57.039   16:33:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8b9309af-5db0-40a3-8525-861e2bf48636 -c nvc0n1p0 --l2p_dram_limit 20
00:22:57.299  [2024-12-09 16:33:26.323474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.323531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:22:57.299  [2024-12-09 16:33:26.323547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:22:57.299  [2024-12-09 16:33:26.323559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.323619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.323634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:22:57.299  [2024-12-09 16:33:26.323644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.044 ms
00:22:57.299  [2024-12-09 16:33:26.323656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.323673] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:57.299  [2024-12-09 16:33:26.324652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:57.299  [2024-12-09 16:33:26.324679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.324693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:22:57.299  [2024-12-09 16:33:26.324713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.012 ms
00:22:57.299  [2024-12-09 16:33:26.324727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.324842] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8e861fa4-50f5-4800-bacd-110f7b2ac5e9
00:22:57.299  [2024-12-09 16:33:26.326477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.326604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:22:57.299  [2024-12-09 16:33:26.326710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.017 ms
00:22:57.299  [2024-12-09 16:33:26.326748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.334378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.334513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:22:57.299  [2024-12-09 16:33:26.334607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.560 ms
00:22:57.299  [2024-12-09 16:33:26.334647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.334772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.334954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:22:57.299  [2024-12-09 16:33:26.335004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.079 ms
00:22:57.299  [2024-12-09 16:33:26.335034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.335111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.335198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:22:57.299  [2024-12-09 16:33:26.335238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:22:57.299  [2024-12-09 16:33:26.335269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.335335] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:57.299  [2024-12-09 16:33:26.340432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.340580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:22:57.299  [2024-12-09 16:33:26.340674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.120 ms
00:22:57.299  [2024-12-09 16:33:26.340717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.340776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.340946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:22:57.299  [2024-12-09 16:33:26.340964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:22:57.299  [2024-12-09 16:33:26.340977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.341036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:22:57.299  [2024-12-09 16:33:26.341175] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:57.299  [2024-12-09 16:33:26.341190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:57.299  [2024-12-09 16:33:26.341206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:57.299  [2024-12-09 16:33:26.341219] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341236] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341248] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:22:57.299  [2024-12-09 16:33:26.341260] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:22:57.299  [2024-12-09 16:33:26.341270] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:22:57.299  [2024-12-09 16:33:26.341282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:22:57.299  [2024-12-09 16:33:26.341295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.341308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:22:57.299  [2024-12-09 16:33:26.341318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.260 ms
00:22:57.299  [2024-12-09 16:33:26.341331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.341404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.299  [2024-12-09 16:33:26.341419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:22:57.299  [2024-12-09 16:33:26.341430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.057 ms
00:22:57.299  [2024-12-09 16:33:26.341445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.299  [2024-12-09 16:33:26.341524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:57.299  [2024-12-09 16:33:26.341541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:22:57.299  [2024-12-09 16:33:26.341552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:22:57.299  [2024-12-09 16:33:26.341587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:22:57.299  [2024-12-09 16:33:26.341617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:22:57.299  [2024-12-09 16:33:26.341640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:22:57.299  [2024-12-09 16:33:26.341663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:22:57.299  [2024-12-09 16:33:26.341672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:22:57.299  [2024-12-09 16:33:26.341684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:22:57.299  [2024-12-09 16:33:26.341694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:22:57.299  [2024-12-09 16:33:26.341711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:22:57.299  [2024-12-09 16:33:26.341732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:22:57.299  [2024-12-09 16:33:26.341763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:22:57.299  [2024-12-09 16:33:26.341796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:22:57.299  [2024-12-09 16:33:26.341826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:22:57.299  [2024-12-09 16:33:26.341858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:22:57.299  [2024-12-09 16:33:26.341867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:57.299  [2024-12-09 16:33:26.341881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:22:57.299  [2024-12-09 16:33:26.341891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:22:57.299  [2024-12-09 16:33:26.342127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:22:57.299  [2024-12-09 16:33:26.342162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:22:57.299  [2024-12-09 16:33:26.342196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:22:57.299  [2024-12-09 16:33:26.342226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:22:57.299  [2024-12-09 16:33:26.342258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:22:57.299  [2024-12-09 16:33:26.342287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:22:57.299  [2024-12-09 16:33:26.342376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:57.299  [2024-12-09 16:33:26.342412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:22:57.299  [2024-12-09 16:33:26.342444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:22:57.299  [2024-12-09 16:33:26.342474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:57.299  [2024-12-09 16:33:26.342505] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:57.299  [2024-12-09 16:33:26.342544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:22:57.299  [2024-12-09 16:33:26.342578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:22:57.299  [2024-12-09 16:33:26.342608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:57.300  [2024-12-09 16:33:26.342732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:22:57.300  [2024-12-09 16:33:26.342763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:22:57.300  [2024-12-09 16:33:26.342795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:22:57.300  [2024-12-09 16:33:26.342825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:22:57.300  [2024-12-09 16:33:26.342857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:22:57.300  [2024-12-09 16:33:26.342886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:22:57.300  [2024-12-09 16:33:26.342935] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:57.300  [2024-12-09 16:33:26.343036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:57.300  [2024-12-09 16:33:26.343141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:22:57.300  [2024-12-09 16:33:26.343191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:22:57.300  [2024-12-09 16:33:26.343317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:22:57.300  [2024-12-09 16:33:26.343334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:22:57.300  [2024-12-09 16:33:26.343345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:22:57.300  [2024-12-09 16:33:26.343358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:22:57.300  [2024-12-09 16:33:26.343369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:22:57.300  [2024-12-09 16:33:26.343385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:22:57.300  [2024-12-09 16:33:26.343395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:22:57.300  [2024-12-09 16:33:26.343455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:57.300  [2024-12-09 16:33:26.343468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:57.300  [2024-12-09 16:33:26.343496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:57.300  [2024-12-09 16:33:26.343509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:57.300  [2024-12-09 16:33:26.343520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:57.300  [2024-12-09 16:33:26.343535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.300  [2024-12-09 16:33:26.343546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:22:57.300  [2024-12-09 16:33:26.343560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.063 ms
00:22:57.300  [2024-12-09 16:33:26.343570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:57.300  [2024-12-09 16:33:26.343618] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:22:57.300  [2024-12-09 16:33:26.343631] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:23:01.494  [2024-12-09 16:33:30.166565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.494  [2024-12-09 16:33:30.166635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:23:01.494  [2024-12-09 16:33:30.166654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3829.148 ms
00:23:01.495  [2024-12-09 16:33:30.166664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.204318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.204368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:01.495  [2024-12-09 16:33:30.204385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.407 ms
00:23:01.495  [2024-12-09 16:33:30.204396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.204529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.204542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:01.495  [2024-12-09 16:33:30.204558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.052 ms
00:23:01.495  [2024-12-09 16:33:30.204567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.261163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.261208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:01.495  [2024-12-09 16:33:30.261224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 56.646 ms
00:23:01.495  [2024-12-09 16:33:30.261234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.261273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.261284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:01.495  [2024-12-09 16:33:30.261297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:23:01.495  [2024-12-09 16:33:30.261309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.261772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.261785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:01.495  [2024-12-09 16:33:30.261798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.413 ms
00:23:01.495  [2024-12-09 16:33:30.261807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.261929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.261958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:01.495  [2024-12-09 16:33:30.261974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.103 ms
00:23:01.495  [2024-12-09 16:33:30.261983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.280597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.280633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:01.495  [2024-12-09 16:33:30.280649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.621 ms
00:23:01.495  [2024-12-09 16:33:30.280670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.292890] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB
00:23:01.495  [2024-12-09 16:33:30.298816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.298853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:01.495  [2024-12-09 16:33:30.298866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.088 ms
00:23:01.495  [2024-12-09 16:33:30.298878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.391231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.391291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:23:01.495  [2024-12-09 16:33:30.391306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 92.461 ms
00:23:01.495  [2024-12-09 16:33:30.391319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.391486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.391505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:01.495  [2024-12-09 16:33:30.391516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.128 ms
00:23:01.495  [2024-12-09 16:33:30.391531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.427836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.428040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:23:01.495  [2024-12-09 16:33:30.428065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.298 ms
00:23:01.495  [2024-12-09 16:33:30.428078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.462215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.462258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:23:01.495  [2024-12-09 16:33:30.462272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.153 ms
00:23:01.495  [2024-12-09 16:33:30.462284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.462987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.463007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:01.495  [2024-12-09 16:33:30.463018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.668 ms
00:23:01.495  [2024-12-09 16:33:30.463031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.560115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.560305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:23:01.495  [2024-12-09 16:33:30.560344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 97.190 ms
00:23:01.495  [2024-12-09 16:33:30.560358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.595524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.595571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:23:01.495  [2024-12-09 16:33:30.595588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.144 ms
00:23:01.495  [2024-12-09 16:33:30.595602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.631156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.631197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:23:01.495  [2024-12-09 16:33:30.631211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.571 ms
00:23:01.495  [2024-12-09 16:33:30.631239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.665181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.665235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:01.495  [2024-12-09 16:33:30.665248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.958 ms
00:23:01.495  [2024-12-09 16:33:30.665261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.665302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.665319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:01.495  [2024-12-09 16:33:30.665329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:01.495  [2024-12-09 16:33:30.665357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.665452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.495  [2024-12-09 16:33:30.665468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:01.495  [2024-12-09 16:33:30.665478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.033 ms
00:23:01.495  [2024-12-09 16:33:30.665490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:01.495  [2024-12-09 16:33:30.666684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4349.849 ms, result 0
00:23:01.755  {
00:23:01.755    "name": "ftl0",
00:23:01.755    "uuid": "8e861fa4-50f5-4800-bacd-110f7b2ac5e9"
00:23:01.755  }
00:23:01.755   16:33:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:23:01.755   16:33:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
00:23:01.755   16:33:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
00:23:01.755   16:33:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
00:23:02.013  [2024-12-09 16:33:30.970423] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:02.013  I/O size of 69632 is greater than zero copy threshold (65536).
00:23:02.013  Zero copy mechanism will not be used.
00:23:02.013  Running I/O for 4 seconds...
00:23:03.887       1438.00 IOPS,    95.49 MiB/s
[2024-12-09T16:33:34.003Z]      1459.50 IOPS,    96.92 MiB/s
[2024-12-09T16:33:35.379Z]      1473.33 IOPS,    97.84 MiB/s
[2024-12-09T16:33:35.379Z]      1494.25 IOPS,    99.23 MiB/s
00:23:06.200                                                                                                  Latency(us)
00:23:06.200  
[2024-12-09T16:33:35.379Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:06.200  Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:23:06.200  	 ftl0                :       4.00    1493.70      99.19       0.00     0.00     701.36     253.33    2684.61
00:23:06.200  
[2024-12-09T16:33:35.379Z]  ===================================================================================================================
00:23:06.200  
[2024-12-09T16:33:35.379Z]  Total                       :               1493.70      99.19       0.00     0.00     701.36     253.33    2684.61
00:23:06.200  [2024-12-09 16:33:34.975715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:06.200  {
00:23:06.200    "results": [
00:23:06.200      {
00:23:06.200        "job": "ftl0",
00:23:06.200        "core_mask": "0x1",
00:23:06.200        "workload": "randwrite",
00:23:06.200        "status": "finished",
00:23:06.200        "queue_depth": 1,
00:23:06.201        "io_size": 69632,
00:23:06.201        "runtime": 4.00215,
00:23:06.201        "iops": 1493.6971377884388,
00:23:06.201        "mibps": 99.19082555626352,
00:23:06.201        "io_failed": 0,
00:23:06.201        "io_timeout": 0,
00:23:06.201        "avg_latency_us": 701.3591199861339,
00:23:06.201        "min_latency_us": 253.3269076305221,
00:23:06.201        "max_latency_us": 2684.6072289156627
00:23:06.201      }
00:23:06.201    ],
00:23:06.201    "core_count": 1
00:23:06.201  }
00:23:06.201   16:33:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:23:06.201  [2024-12-09 16:33:35.092688] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:06.201  Running I/O for 4 seconds...
00:23:08.077      11213.00 IOPS,    43.80 MiB/s
[2024-12-09T16:33:38.193Z]     11115.50 IOPS,    43.42 MiB/s
[2024-12-09T16:33:39.128Z]     10695.00 IOPS,    41.78 MiB/s
[2024-12-09T16:33:39.128Z]     10811.25 IOPS,    42.23 MiB/s
00:23:09.949                                                                                                  Latency(us)
00:23:09.949  
[2024-12-09T16:33:39.128Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:09.949  Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:23:09.949  	 ftl0                :       4.02   10801.51      42.19       0.00     0.00   11826.23     220.43   33478.63
00:23:09.949  
[2024-12-09T16:33:39.128Z]  ===================================================================================================================
00:23:09.949  
[2024-12-09T16:33:39.128Z]  Total                       :              10801.51      42.19       0.00     0.00   11826.23       0.00   33478.63
00:23:09.949  {
00:23:09.949    "results": [
00:23:09.949      {
00:23:09.949        "job": "ftl0",
00:23:09.949        "core_mask": "0x1",
00:23:09.949        "workload": "randwrite",
00:23:09.949        "status": "finished",
00:23:09.949        "queue_depth": 128,
00:23:09.949        "io_size": 4096,
00:23:09.949        "runtime": 4.015457,
00:23:09.949        "iops": 10801.510263962484,
00:23:09.949        "mibps": 42.19339946860345,
00:23:09.949        "io_failed": 0,
00:23:09.949        "io_timeout": 0,
00:23:09.949        "avg_latency_us": 11826.225411122738,
00:23:09.950        "min_latency_us": 220.4273092369478,
00:23:09.950        "max_latency_us": 33478.631325301205
00:23:09.950      }
00:23:09.950    ],
00:23:09.950    "core_count": 1
00:23:09.950  }
00:23:09.950  [2024-12-09 16:33:39.111799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:10.208   16:33:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:23:10.208  [2024-12-09 16:33:39.231279] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:10.208  Running I/O for 4 seconds...
00:23:12.081       8608.00 IOPS,    33.62 MiB/s
[2024-12-09T16:33:42.639Z]      8664.50 IOPS,    33.85 MiB/s
[2024-12-09T16:33:43.576Z]      8724.67 IOPS,    34.08 MiB/s
[2024-12-09T16:33:43.576Z]      8443.75 IOPS,    32.98 MiB/s
00:23:14.397                                                                                                  Latency(us)
00:23:14.397  
[2024-12-09T16:33:43.577Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:14.398  Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:14.398  	 Verification LBA range: start 0x0 length 0x1400000
00:23:14.398  	 ftl0                :       4.01    8453.95      33.02       0.00     0.00   15094.66     263.20   31162.50
00:23:14.398  
[2024-12-09T16:33:43.577Z]  ===================================================================================================================
00:23:14.398  
[2024-12-09T16:33:43.577Z]  Total                       :               8453.95      33.02       0.00     0.00   15094.66       0.00   31162.50
00:23:14.398  [2024-12-09 16:33:43.254167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:14.398  {
00:23:14.398    "results": [
00:23:14.398      {
00:23:14.398        "job": "ftl0",
00:23:14.398        "core_mask": "0x1",
00:23:14.398        "workload": "verify",
00:23:14.398        "status": "finished",
00:23:14.398        "verify_range": {
00:23:14.398          "start": 0,
00:23:14.398          "length": 20971520
00:23:14.398        },
00:23:14.398        "queue_depth": 128,
00:23:14.398        "io_size": 4096,
00:23:14.398        "runtime": 4.010197,
00:23:14.398        "iops": 8453.948771095285,
00:23:14.398        "mibps": 33.02323738709096,
00:23:14.398        "io_failed": 0,
00:23:14.398        "io_timeout": 0,
00:23:14.398        "avg_latency_us": 15094.661905293286,
00:23:14.398        "min_latency_us": 263.19678714859435,
00:23:14.398        "max_latency_us": 31162.499598393573
00:23:14.398      }
00:23:14.398    ],
00:23:14.398    "core_count": 1
00:23:14.398  }
00:23:14.398   16:33:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:23:14.398  [2024-12-09 16:33:43.461661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.398  [2024-12-09 16:33:43.461853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:14.398  [2024-12-09 16:33:43.461963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:23:14.398  [2024-12-09 16:33:43.462008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.398  [2024-12-09 16:33:43.462063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:14.398  [2024-12-09 16:33:43.466193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.398  [2024-12-09 16:33:43.466335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:14.398  [2024-12-09 16:33:43.466481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.074 ms
00:23:14.398  [2024-12-09 16:33:43.466520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.398  [2024-12-09 16:33:43.468318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.398  [2024-12-09 16:33:43.468454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:14.398  [2024-12-09 16:33:43.468549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.747 ms
00:23:14.398  [2024-12-09 16:33:43.468586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.680612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.680783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:14.657  [2024-12-09 16:33:43.680879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 212.320 ms
00:23:14.657  [2024-12-09 16:33:43.680937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.685823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.685971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:14.657  [2024-12-09 16:33:43.686048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.829 ms
00:23:14.657  [2024-12-09 16:33:43.686087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.721348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.721512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:14.657  [2024-12-09 16:33:43.721597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.214 ms
00:23:14.657  [2024-12-09 16:33:43.721633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.742172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.742304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:14.657  [2024-12-09 16:33:43.742397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.502 ms
00:23:14.657  [2024-12-09 16:33:43.742433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.742593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.742636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:14.657  [2024-12-09 16:33:43.742732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.099 ms
00:23:14.657  [2024-12-09 16:33:43.742768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.777089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.777236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:14.657  [2024-12-09 16:33:43.777361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.331 ms
00:23:14.657  [2024-12-09 16:33:43.777398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.657  [2024-12-09 16:33:43.810604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.657  [2024-12-09 16:33:43.810735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:14.657  [2024-12-09 16:33:43.810826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.200 ms
00:23:14.657  [2024-12-09 16:33:43.810861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.918  [2024-12-09 16:33:43.844514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.918  [2024-12-09 16:33:43.844673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:14.918  [2024-12-09 16:33:43.844779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.638 ms
00:23:14.918  [2024-12-09 16:33:43.844816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.918  [2024-12-09 16:33:43.877970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.918  [2024-12-09 16:33:43.878117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:14.918  [2024-12-09 16:33:43.878232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.075 ms
00:23:14.918  [2024-12-09 16:33:43.878268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.918  [2024-12-09 16:33:43.878361] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:14.918  [2024-12-09 16:33:43.878406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.878998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.918  [2024-12-09 16:33:43.879718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.879992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:14.919  [2024-12-09 16:33:43.880126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
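All one hundred bands report "0 / 261120" valid blocks and state free, which is consistent with the zero user writes in the stats dump just below. When eyeballing a long dump like this, a quick aggregation is handier than scanning line by line; a purely illustrative one-liner (ftl_dump.log is a hypothetical stand-in for wherever this console output was saved):

    # Count bands per state from a saved copy of the dump above.
    grep -o 'state: [a-z]*' ftl_dump.log | sort | uniq -c
    # expected here: '100 state: free'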
00:23:14.919  [2024-12-09 16:33:43.880145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:14.919  [2024-12-09 16:33:43.880158] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         8e861fa4-50f5-4800-bacd-110f7b2ac5e9
00:23:14.919  [2024-12-09 16:33:43.880172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:14.919  [2024-12-09 16:33:43.880184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:14.919  [2024-12-09 16:33:43.880193] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:14.919  [2024-12-09 16:33:43.880206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:14.919  [2024-12-09 16:33:43.880215] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:14.919  [2024-12-09 16:33:43.880228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:14.919  [2024-12-09 16:33:43.880238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:14.919  [2024-12-09 16:33:43.880251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:14.919  [2024-12-09 16:33:43.880260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:14.919  [2024-12-09 16:33:43.880273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.919  [2024-12-09 16:33:43.880283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:14.919  [2024-12-09 16:33:43.880296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.917 ms
00:23:14.919  [2024-12-09 16:33:43.880306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
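The "WAF: inf" line in the stats dump above is the usual write-amplification ratio, total media writes divided by user writes; this run recorded 960 total writes against 0 user writes, so the ratio is undefined and prints as inf. A minimal re-computation from those two counters (the variable names are mine, not SPDK's):

    total_writes=960   # "total writes" in the dump above
    user_writes=0      # "user writes" in the dump above
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.3f\n", t / u }'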
00:23:14.919  [2024-12-09 16:33:43.899431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.919  [2024-12-09 16:33:43.899467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:14.919  [2024-12-09 16:33:43.899482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.088 ms
00:23:14.919  [2024-12-09 16:33:43.899491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.919  [2024-12-09 16:33:43.900065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.919  [2024-12-09 16:33:43.900079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:14.919  [2024-12-09 16:33:43.900092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.551 ms
00:23:14.919  [2024-12-09 16:33:43.900102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.919  [2024-12-09 16:33:43.950613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.919  [2024-12-09 16:33:43.950647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:14.919  [2024-12-09 16:33:43.950664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:14.919  [2024-12-09 16:33:43.950674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.919  [2024-12-09 16:33:43.950724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.919  [2024-12-09 16:33:43.950734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:14.919  [2024-12-09 16:33:43.950746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:14.919  [2024-12-09 16:33:43.950756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.919  [2024-12-09 16:33:43.950835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.919  [2024-12-09 16:33:43.950848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:14.919  [2024-12-09 16:33:43.950859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:14.919  [2024-12-09 16:33:43.950869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.919  [2024-12-09 16:33:43.950887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.919  [2024-12-09 16:33:43.950914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:14.919  [2024-12-09 16:33:43.950926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:14.919  [2024-12-09 16:33:43.950952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.919  [2024-12-09 16:33:44.067356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:14.919  [2024-12-09 16:33:44.067406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:14.919  [2024-12-09 16:33:44.067424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:14.919  [2024-12-09 16:33:44.067434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.161527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.161573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:15.179  [2024-12-09 16:33:44.161590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.161615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.161729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.161742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:15.179  [2024-12-09 16:33:44.161755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.161765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.161814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.161826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:15.179  [2024-12-09 16:33:44.161844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.161854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.161989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.162006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:15.179  [2024-12-09 16:33:44.162022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.162032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.162072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.162084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:15.179  [2024-12-09 16:33:44.162097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.162107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.162165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.162182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:15.179  [2024-12-09 16:33:44.162194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.162219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.162300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:15.179  [2024-12-09 16:33:44.162314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:15.179  [2024-12-09 16:33:44.162327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:15.179  [2024-12-09 16:33:44.162337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.179  [2024-12-09 16:33:44.162488] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 701.922 ms, result 0
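Note the shape of the teardown trace above: each "Rollback" entry mirrors an earlier startup "Action", and they fire in reverse registration order, which is why the sequence ends on 'Open cache bdev' and 'Open base bdev' (the first resources acquired are the last released). A toy illustration of that ordering, using step names taken from the trace:

    # Illustrative only: shutdown replays each init step's rollback in reverse.
    init_steps=('Open base bdev' 'Open cache bdev' 'Initialize superblock'
                'Initialize memory pools' 'Initialize bands')
    for (( i = ${#init_steps[@]} - 1; i >= 0; i-- )); do
        printf 'Rollback: %s\n' "${init_steps[i]}"
    done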
00:23:15.179  true
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78878
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78878 ']'
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78878
00:23:15.179    16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:15.179    16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78878
00:23:15.179  killing process with pid 78878
00:23:15.179  Received shutdown signal, test time was about 4.000000 seconds
00:23:15.179                                                                                                  Latency(us)
[2024-12-09T16:33:44.358Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T16:33:44.358Z]  ===================================================================================================================
[2024-12-09T16:33:44.358Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78878'
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78878
00:23:15.179   16:33:44 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78878
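The xtrace above is autotest_common.sh's killprocess guard rail: verify the pid argument is non-empty, confirm the process still exists with kill -0, resolve its name with ps (and refuse to signal a sudo wrapper), then send the signal and wait to reap it. A condensed sketch of that pattern, not the exact helper:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard above
        kill -0 "$pid" || return 0             # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
            [ "$name" = sudo ] && return 1     # never kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it so the test can continue
    }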
00:23:19.372  Remove shared memory files
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:23:19.372  ************************************
00:23:19.372  END TEST ftl_bdevperf
00:23:19.372  ************************************
00:23:19.372  
00:23:19.372  real	0m25.643s
00:23:19.372  user	0m28.090s
00:23:19.372  sys	0m1.205s
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:19.372   16:33:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:19.372   16:33:47 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:23:19.372   16:33:47 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:19.372   16:33:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:19.372   16:33:47 ftl -- common/autotest_common.sh@10 -- # set +x
00:23:19.372  ************************************
00:23:19.372  START TEST ftl_trim
00:23:19.372  ************************************
00:23:19.372   16:33:47 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:23:19.372  * Looking for test storage...
00:23:19.372  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:23:19.372    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:19.372     16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version
00:23:19.372     16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:19.372    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:19.372     16:33:48 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:19.372    16:33:48 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
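The long xtrace above is scripts/common.sh's generic comparator resolving `lt 1.15 2` for the installed lcov: both version strings are split on '.', '-' and ':' via IFS, each field is validated as a decimal integer (the `decimal 1` / `decimal 2` calls), and fields are compared left to right until one side wins. A simplified re-statement of that loop (the real helper handles more cases):

    lt_sketch() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a > b )) && return 1                 # ver1 newer: not less-than
            (( a < b )) && return 0                 # ver1 older: less-than holds
        done
        return 1                                    # equal: not strictly less
    }
    lt_sketch 1.15 2 && echo 'installed lcov predates 2.x'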
00:23:19.372    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:19.372    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:19.373  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:19.373  		--rc genhtml_branch_coverage=1
00:23:19.373  		--rc genhtml_function_coverage=1
00:23:19.373  		--rc genhtml_legend=1
00:23:19.373  		--rc geninfo_all_blocks=1
00:23:19.373  		--rc geninfo_unexecuted_blocks=1
00:23:19.373  		
00:23:19.373  		'
00:23:19.373    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:19.373  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:19.373  		--rc genhtml_branch_coverage=1
00:23:19.373  		--rc genhtml_function_coverage=1
00:23:19.373  		--rc genhtml_legend=1
00:23:19.373  		--rc geninfo_all_blocks=1
00:23:19.373  		--rc geninfo_unexecuted_blocks=1
00:23:19.373  		
00:23:19.373  		'
00:23:19.373    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:23:19.373  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:19.373  		--rc genhtml_branch_coverage=1
00:23:19.373  		--rc genhtml_function_coverage=1
00:23:19.373  		--rc genhtml_legend=1
00:23:19.373  		--rc geninfo_all_blocks=1
00:23:19.373  		--rc geninfo_unexecuted_blocks=1
00:23:19.373  		
00:23:19.373  		'
00:23:19.373    16:33:48 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:23:19.373  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:19.373  		--rc genhtml_branch_coverage=1
00:23:19.373  		--rc genhtml_function_coverage=1
00:23:19.373  		--rc genhtml_legend=1
00:23:19.373  		--rc geninfo_all_blocks=1
00:23:19.373  		--rc geninfo_unexecuted_blocks=1
00:23:19.373  		
00:23:19.373  		'
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:23:19.373      16:33:48 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:23:19.373     16:33:48 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:23:19.373     16:33:48 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:23:19.373    16:33:48 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79242
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79242
00:23:19.373   16:33:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79242 ']'
00:23:19.373   16:33:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:19.373   16:33:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:19.373   16:33:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:19.373  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:19.373   16:33:48 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:23:19.373   16:33:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:19.373   16:33:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:23:19.373  [2024-12-09 16:33:48.352033] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:23:19.373  [2024-12-09 16:33:48.352145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79242 ]
00:23:19.373  [2024-12-09 16:33:48.528074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:19.632  [2024-12-09 16:33:48.634944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:19.632  [2024-12-09 16:33:48.635065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:19.632  [2024-12-09 16:33:48.635096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
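spdk_tgt was started with `-m 0x7` (the command a few lines up), and the EAL banner confirms three cores with reactors on cores 0, 1 and 2: the mask is binary 111, one bit per core. A quick way to enumerate the cores a hex mask selects, purely as an illustration:

    mask=0x7
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 0, 1 and 2, matching the three 'Reactor started' lines above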
00:23:20.570   16:33:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:20.570   16:33:49 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
00:23:20.570     16:33:49 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:23:20.570    16:33:49 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
00:23:20.828     16:33:49 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:23:20.828     16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:23:20.828     16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:20.828     16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:20.828     16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:20.828      16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:23:20.828     16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:20.828    {
00:23:20.828      "name": "nvme0n1",
00:23:20.828      "aliases": [
00:23:20.828        "de2a384d-7086-4ead-a18f-5730f02daa08"
00:23:20.828      ],
00:23:20.828      "product_name": "NVMe disk",
00:23:20.828      "block_size": 4096,
00:23:20.828      "num_blocks": 1310720,
00:23:20.828      "uuid": "de2a384d-7086-4ead-a18f-5730f02daa08",
00:23:20.828      "numa_id": -1,
00:23:20.828      "assigned_rate_limits": {
00:23:20.828        "rw_ios_per_sec": 0,
00:23:20.828        "rw_mbytes_per_sec": 0,
00:23:20.828        "r_mbytes_per_sec": 0,
00:23:20.828        "w_mbytes_per_sec": 0
00:23:20.828      },
00:23:20.828      "claimed": true,
00:23:20.828      "claim_type": "read_many_write_one",
00:23:20.828      "zoned": false,
00:23:20.828      "supported_io_types": {
00:23:20.828        "read": true,
00:23:20.828        "write": true,
00:23:20.828        "unmap": true,
00:23:20.828        "flush": true,
00:23:20.828        "reset": true,
00:23:20.828        "nvme_admin": true,
00:23:20.828        "nvme_io": true,
00:23:20.828        "nvme_io_md": false,
00:23:20.828        "write_zeroes": true,
00:23:20.828        "zcopy": false,
00:23:20.828        "get_zone_info": false,
00:23:20.828        "zone_management": false,
00:23:20.828        "zone_append": false,
00:23:20.828        "compare": true,
00:23:20.828        "compare_and_write": false,
00:23:20.828        "abort": true,
00:23:20.828        "seek_hole": false,
00:23:20.828        "seek_data": false,
00:23:20.829        "copy": true,
00:23:20.829        "nvme_iov_md": false
00:23:20.829      },
00:23:20.829      "driver_specific": {
00:23:20.829        "nvme": [
00:23:20.829          {
00:23:20.829            "pci_address": "0000:00:11.0",
00:23:20.829            "trid": {
00:23:20.829              "trtype": "PCIe",
00:23:20.829              "traddr": "0000:00:11.0"
00:23:20.829            },
00:23:20.829            "ctrlr_data": {
00:23:20.829              "cntlid": 0,
00:23:20.829              "vendor_id": "0x1b36",
00:23:20.829              "model_number": "QEMU NVMe Ctrl",
00:23:20.829              "serial_number": "12341",
00:23:20.829              "firmware_revision": "8.0.0",
00:23:20.829              "subnqn": "nqn.2019-08.org.qemu:12341",
00:23:20.829              "oacs": {
00:23:20.829                "security": 0,
00:23:20.829                "format": 1,
00:23:20.829                "firmware": 0,
00:23:20.829                "ns_manage": 1
00:23:20.829              },
00:23:20.829              "multi_ctrlr": false,
00:23:20.829              "ana_reporting": false
00:23:20.829            },
00:23:20.829            "vs": {
00:23:20.829              "nvme_version": "1.4"
00:23:20.829            },
00:23:20.829            "ns_data": {
00:23:20.829              "id": 1,
00:23:20.829              "can_share": false
00:23:20.829            }
00:23:20.829          }
00:23:20.829        ],
00:23:20.829        "mp_policy": "active_passive"
00:23:20.829      }
00:23:20.829    }
00:23:20.829  ]'
00:23:20.829      16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:20.829     16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:20.829      16:33:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:21.103     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720
00:23:21.103     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:23:21.103     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120
00:23:21.103    16:33:50 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120
00:23:21.103    16:33:50 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
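get_bdev_size above reduces the bdev_get_bdevs JSON to a size in MiB: jq pulls block_size (4096) and num_blocks (1310720), and 1310720 * 4096 / 1024 / 1024 = 5120 MiB, the base_size the script then compares with the requested 103424 MiB. The same pipeline as a standalone sketch (paths taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")      # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")      # 1310720
    echo "$(( nb * bs / 1024 / 1024 )) MiB"     # 5120 MiB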
00:23:21.103    16:33:50 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols
00:23:21.103     16:33:50 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:23:21.103     16:33:50 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:23:21.103    16:33:50 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=34a50bd0-c10d-447e-b805-73db68c4f6cc
00:23:21.103    16:33:50 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores
00:23:21.103    16:33:50 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34a50bd0-c10d-447e-b805-73db68c4f6cc
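clear_lvols above makes the run idempotent: it lists every existing lvstore UUID and deletes each one before a fresh store is created. The loop the trace walks through, condensed (the helper's exact body may differ slightly):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"   # 34a50bd0-... in this run
    done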
00:23:21.386     16:33:50 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:23:21.673    16:33:50 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=48feb82e-8956-4a4c-a210-ee79f0fca43a
00:23:21.673    16:33:50 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 48feb82e-8956-4a4c-a210-ee79f0fca43a
00:23:21.943   16:33:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=63949ebf-ab29-4492-8e4d-77fdf4fca543
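The two RPCs above carve an lvstore named 'lvs' out of nvme0n1 and then create a 103424 MiB thin-provisioned (-t) volume on it; the UUID the second call returns (63949ebf-...) is the bdev name the rest of the test passes around as split_bdev. Condensed, reusing the rpc path from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)              # -> 48feb82e-...
    base=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")   # thin-provisioned
    echo "FTL base bdev: $base"                                     # -> 63949ebf-...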
00:23:21.943    16:33:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:21.943    16:33:50 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0
00:23:21.943    16:33:50 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:23:21.943    16:33:50 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:21.943    16:33:50 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size=
00:23:21.943     16:33:50 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:21.943     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:21.943     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:21.943     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:21.943     16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:21.943      16:33:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:21.943     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:21.943    {
00:23:21.943      "name": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:21.943      "aliases": [
00:23:21.943        "lvs/nvme0n1p0"
00:23:21.943      ],
00:23:21.943      "product_name": "Logical Volume",
00:23:21.943      "block_size": 4096,
00:23:21.943      "num_blocks": 26476544,
00:23:21.943      "uuid": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:21.943      "assigned_rate_limits": {
00:23:21.943        "rw_ios_per_sec": 0,
00:23:21.943        "rw_mbytes_per_sec": 0,
00:23:21.943        "r_mbytes_per_sec": 0,
00:23:21.943        "w_mbytes_per_sec": 0
00:23:21.943      },
00:23:21.943      "claimed": false,
00:23:21.943      "zoned": false,
00:23:21.943      "supported_io_types": {
00:23:21.943        "read": true,
00:23:21.943        "write": true,
00:23:21.943        "unmap": true,
00:23:21.943        "flush": false,
00:23:21.943        "reset": true,
00:23:21.943        "nvme_admin": false,
00:23:21.943        "nvme_io": false,
00:23:21.943        "nvme_io_md": false,
00:23:21.943        "write_zeroes": true,
00:23:21.943        "zcopy": false,
00:23:21.943        "get_zone_info": false,
00:23:21.943        "zone_management": false,
00:23:21.943        "zone_append": false,
00:23:21.943        "compare": false,
00:23:21.943        "compare_and_write": false,
00:23:21.943        "abort": false,
00:23:21.943        "seek_hole": true,
00:23:21.943        "seek_data": true,
00:23:21.943        "copy": false,
00:23:21.943        "nvme_iov_md": false
00:23:21.943      },
00:23:21.943      "driver_specific": {
00:23:21.943        "lvol": {
00:23:21.943          "lvol_store_uuid": "48feb82e-8956-4a4c-a210-ee79f0fca43a",
00:23:21.943          "base_bdev": "nvme0n1",
00:23:21.943          "thin_provision": true,
00:23:21.943          "num_allocated_clusters": 0,
00:23:21.943          "snapshot": false,
00:23:21.943          "clone": false,
00:23:21.943          "esnap_clone": false
00:23:21.943        }
00:23:21.943      }
00:23:21.943    }
00:23:21.943  ]'
00:23:21.944      16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:21.944     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:21.944      16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:22.203     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:23:22.203     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:23:22.203     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:23:22.203    16:33:51 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171
00:23:22.203    16:33:51 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev
00:23:22.203     16:33:51 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:23:22.462    16:33:51 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:23:22.462    16:33:51 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]]
00:23:22.462     16:33:51 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:22.462     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:22.462     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:22.462     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:22.462     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:22.462      16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:22.462     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:22.462    {
00:23:22.462      "name": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:22.462      "aliases": [
00:23:22.462        "lvs/nvme0n1p0"
00:23:22.462      ],
00:23:22.462      "product_name": "Logical Volume",
00:23:22.462      "block_size": 4096,
00:23:22.462      "num_blocks": 26476544,
00:23:22.462      "uuid": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:22.462      "assigned_rate_limits": {
00:23:22.462        "rw_ios_per_sec": 0,
00:23:22.462        "rw_mbytes_per_sec": 0,
00:23:22.462        "r_mbytes_per_sec": 0,
00:23:22.462        "w_mbytes_per_sec": 0
00:23:22.462      },
00:23:22.462      "claimed": false,
00:23:22.462      "zoned": false,
00:23:22.462      "supported_io_types": {
00:23:22.462        "read": true,
00:23:22.462        "write": true,
00:23:22.462        "unmap": true,
00:23:22.462        "flush": false,
00:23:22.462        "reset": true,
00:23:22.462        "nvme_admin": false,
00:23:22.462        "nvme_io": false,
00:23:22.462        "nvme_io_md": false,
00:23:22.462        "write_zeroes": true,
00:23:22.462        "zcopy": false,
00:23:22.462        "get_zone_info": false,
00:23:22.462        "zone_management": false,
00:23:22.462        "zone_append": false,
00:23:22.462        "compare": false,
00:23:22.462        "compare_and_write": false,
00:23:22.462        "abort": false,
00:23:22.462        "seek_hole": true,
00:23:22.462        "seek_data": true,
00:23:22.462        "copy": false,
00:23:22.462        "nvme_iov_md": false
00:23:22.462      },
00:23:22.462      "driver_specific": {
00:23:22.462        "lvol": {
00:23:22.462          "lvol_store_uuid": "48feb82e-8956-4a4c-a210-ee79f0fca43a",
00:23:22.462          "base_bdev": "nvme0n1",
00:23:22.462          "thin_provision": true,
00:23:22.462          "num_allocated_clusters": 0,
00:23:22.462          "snapshot": false,
00:23:22.462          "clone": false,
00:23:22.462          "esnap_clone": false
00:23:22.462        }
00:23:22.462      }
00:23:22.462    }
00:23:22.462  ]'
00:23:22.462      16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:22.721     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:22.721      16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:22.721     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:23:22.721     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:23:22.721     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:23:22.721    16:33:51 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171
00:23:22.721    16:33:51 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:23:22.980   16:33:51 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0
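The NV cache comes from the second controller: nvc0 is attached at 0000:00:10.0 (earlier in the trace), bdev_split_create carves a single 5171 MiB split out of nvc0n1, and that first split, nvc0n1p0, becomes the FTL write-buffer cache. The three commands, condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
    "$rpc" bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB split
    nv_cache=nvc0n1p0                                                    # the first (only) split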
00:23:22.980   16:33:51 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60
00:23:22.980    16:33:51 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:22.981    16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:22.981    16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:22.981    16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:22.981    16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:22.981     16:33:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63949ebf-ab29-4492-8e4d-77fdf4fca543
00:23:22.981    16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:22.981    {
00:23:22.981      "name": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:22.981      "aliases": [
00:23:22.981        "lvs/nvme0n1p0"
00:23:22.981      ],
00:23:22.981      "product_name": "Logical Volume",
00:23:22.981      "block_size": 4096,
00:23:22.981      "num_blocks": 26476544,
00:23:22.981      "uuid": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:22.981      "assigned_rate_limits": {
00:23:22.981        "rw_ios_per_sec": 0,
00:23:22.981        "rw_mbytes_per_sec": 0,
00:23:22.981        "r_mbytes_per_sec": 0,
00:23:22.981        "w_mbytes_per_sec": 0
00:23:22.981      },
00:23:22.981      "claimed": false,
00:23:22.981      "zoned": false,
00:23:22.981      "supported_io_types": {
00:23:22.981        "read": true,
00:23:22.981        "write": true,
00:23:22.981        "unmap": true,
00:23:22.981        "flush": false,
00:23:22.981        "reset": true,
00:23:22.981        "nvme_admin": false,
00:23:22.981        "nvme_io": false,
00:23:22.981        "nvme_io_md": false,
00:23:22.981        "write_zeroes": true,
00:23:22.981        "zcopy": false,
00:23:22.981        "get_zone_info": false,
00:23:22.981        "zone_management": false,
00:23:22.981        "zone_append": false,
00:23:22.981        "compare": false,
00:23:22.981        "compare_and_write": false,
00:23:22.981        "abort": false,
00:23:22.981        "seek_hole": true,
00:23:22.981        "seek_data": true,
00:23:22.981        "copy": false,
00:23:22.981        "nvme_iov_md": false
00:23:22.981      },
00:23:22.981      "driver_specific": {
00:23:22.981        "lvol": {
00:23:22.981          "lvol_store_uuid": "48feb82e-8956-4a4c-a210-ee79f0fca43a",
00:23:22.981          "base_bdev": "nvme0n1",
00:23:22.981          "thin_provision": true,
00:23:22.981          "num_allocated_clusters": 0,
00:23:22.981          "snapshot": false,
00:23:22.981          "clone": false,
00:23:22.981          "esnap_clone": false
00:23:22.981        }
00:23:22.981      }
00:23:22.981    }
00:23:22.981  ]'
00:23:22.981     16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:23.240    16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:23.240     16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:23.240    16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:23:23.240    16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:23:23.240    16:33:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:23:23.240   16:33:52 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
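l2p_dram_size_mb=60 is consistent with applying the l2p_percentage=60 set at trim.sh@46 to the 103424 MiB bdev size and dividing by 1024, all under bash integer division (the exact expression is in trim.sh):

    echo $((103424 * 60 / 100 / 1024))   # -> 60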
00:23:23.240   16:33:52 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 63949ebf-ab29-4492-8e4d-77fdf4fca543 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
00:23:23.240  [2024-12-09 16:33:52.398019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.240  [2024-12-09 16:33:52.398064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:23.240  [2024-12-09 16:33:52.398083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:23.240  [2024-12-09 16:33:52.398094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.240  [2024-12-09 16:33:52.401605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.240  [2024-12-09 16:33:52.401644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:23.240  [2024-12-09 16:33:52.401658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.463 ms
00:23:23.240  [2024-12-09 16:33:52.401669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.240  [2024-12-09 16:33:52.401845] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:23.240  [2024-12-09 16:33:52.402805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:23.240  [2024-12-09 16:33:52.402838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.240  [2024-12-09 16:33:52.402849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:23.240  [2024-12-09 16:33:52.402863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.019 ms
00:23:23.240  [2024-12-09 16:33:52.402874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.240  [2024-12-09 16:33:52.403026] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e0d69a0d-a582-4620-86ef-b082c6824320
00:23:23.241  [2024-12-09 16:33:52.404502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.241  [2024-12-09 16:33:52.404538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:23:23.241  [2024-12-09 16:33:52.404551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:23:23.241  [2024-12-09 16:33:52.404563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.241  [2024-12-09 16:33:52.412086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.241  [2024-12-09 16:33:52.412117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:23.241  [2024-12-09 16:33:52.412147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.396 ms
00:23:23.241  [2024-12-09 16:33:52.412160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.241  [2024-12-09 16:33:52.412328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.241  [2024-12-09 16:33:52.412346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:23.241  [2024-12-09 16:33:52.412357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.079 ms
00:23:23.241  [2024-12-09 16:33:52.412373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.241  [2024-12-09 16:33:52.412429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.241  [2024-12-09 16:33:52.412442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:23.241  [2024-12-09 16:33:52.412453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:23:23.241  [2024-12-09 16:33:52.412469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.241  [2024-12-09 16:33:52.412524] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:23.501  [2024-12-09 16:33:52.417398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.501  [2024-12-09 16:33:52.417429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:23.501  [2024-12-09 16:33:52.417460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.884 ms
00:23:23.501  [2024-12-09 16:33:52.417470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.501  [2024-12-09 16:33:52.417569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.501  [2024-12-09 16:33:52.417598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:23.501  [2024-12-09 16:33:52.417612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:23:23.501  [2024-12-09 16:33:52.417622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.501  [2024-12-09 16:33:52.417679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:23:23.501  [2024-12-09 16:33:52.417824] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:23.501  [2024-12-09 16:33:52.417844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:23.501  [2024-12-09 16:33:52.417857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:23.501  [2024-12-09 16:33:52.417873] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:23.501  [2024-12-09 16:33:52.417885] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:23.501  [2024-12-09 16:33:52.417899] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:23.501  [2024-12-09 16:33:52.417924] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:23.501  [2024-12-09 16:33:52.417938] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:23.501  [2024-12-09 16:33:52.417951] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:23.501  [2024-12-09 16:33:52.417964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.501  [2024-12-09 16:33:52.417974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:23.501  [2024-12-09 16:33:52.417987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.286 ms
00:23:23.501  [2024-12-09 16:33:52.417997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.501  [2024-12-09 16:33:52.418110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.501  [2024-12-09 16:33:52.418126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:23.501  [2024-12-09 16:33:52.418139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.056 ms
00:23:23.501  [2024-12-09 16:33:52.418149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:23.501  [2024-12-09 16:33:52.418306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:23.501  [2024-12-09 16:33:52.418318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:23.501  [2024-12-09 16:33:52.418331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:23.501  [2024-12-09 16:33:52.418363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:23.501  [2024-12-09 16:33:52.418396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:23.501  [2024-12-09 16:33:52.418418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:23.501  [2024-12-09 16:33:52.418427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:23.501  [2024-12-09 16:33:52.418441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:23.501  [2024-12-09 16:33:52.418450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:23.501  [2024-12-09 16:33:52.418462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:23.501  [2024-12-09 16:33:52.418471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:23.501  [2024-12-09 16:33:52.418495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:23.501  [2024-12-09 16:33:52.418527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:23.501  [2024-12-09 16:33:52.418557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:23.501  [2024-12-09 16:33:52.418589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:23.501  [2024-12-09 16:33:52.418619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:23.501  [2024-12-09 16:33:52.418652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:23.501  [2024-12-09 16:33:52.418674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:23.501  [2024-12-09 16:33:52.418683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:23.501  [2024-12-09 16:33:52.418695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:23.501  [2024-12-09 16:33:52.418704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:23.501  [2024-12-09 16:33:52.418716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:23.501  [2024-12-09 16:33:52.418725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:23.501  [2024-12-09 16:33:52.418747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:23.501  [2024-12-09 16:33:52.418758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:23.501  [2024-12-09 16:33:52.418779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:23.501  [2024-12-09 16:33:52.418789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:23.501  [2024-12-09 16:33:52.418812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:23.501  [2024-12-09 16:33:52.418826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:23.501  [2024-12-09 16:33:52.418835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:23.501  [2024-12-09 16:33:52.418847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:23.501  [2024-12-09 16:33:52.418856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:23.501  [2024-12-09 16:33:52.418868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:23.501  [2024-12-09 16:33:52.418878] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:23.501  [2024-12-09 16:33:52.418903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:23.501  [2024-12-09 16:33:52.418918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:23.501  [2024-12-09 16:33:52.418931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:23.502  [2024-12-09 16:33:52.418942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:23.502  [2024-12-09 16:33:52.418955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:23.502  [2024-12-09 16:33:52.418965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:23.502  [2024-12-09 16:33:52.418978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:23.502  [2024-12-09 16:33:52.418988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:23.502  [2024-12-09 16:33:52.419001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:23.502  [2024-12-09 16:33:52.419012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:23.502  [2024-12-09 16:33:52.419029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:23.502  [2024-12-09 16:33:52.419040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:23.502  [2024-12-09 16:33:52.419052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:23.502  [2024-12-09 16:33:52.419063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:23.502  [2024-12-09 16:33:52.419076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:23.502  [2024-12-09 16:33:52.419086] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:23.502  [2024-12-09 16:33:52.419103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:23.502  [2024-12-09 16:33:52.419114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:23.502  [2024-12-09 16:33:52.419127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:23.502  [2024-12-09 16:33:52.419142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:23.502  [2024-12-09 16:33:52.419155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:23.502  [2024-12-09 16:33:52.419166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.502  [2024-12-09 16:33:52.419179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:23.502  [2024-12-09 16:33:52.419190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.919 ms
00:23:23.502  [2024-12-09 16:33:52.419203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
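Two sanity checks on the layout dump above, both exact: the l2p region holds one 4-byte entry per L2P entry (the dump reports "L2P address size: 4"), and the same region appears in the superblock metadata as type:0x2 with blk_sz:0x5a00 4 KiB blocks:

    echo $((23592960 * 4 / 1024 / 1024))    # -> 90  (Region l2p: 90.00 MiB)
    echo $((0x5a00 * 4096 / 1024 / 1024))   # -> 90  (type:0x2 blk_sz:0x5a00 -> 90 MiB)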
00:23:23.502  [2024-12-09 16:33:52.419337] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:23:23.502  [2024-12-09 16:33:52.419355] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:23:27.701  [2024-12-09 16:33:56.197735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.197791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:23:27.701  [2024-12-09 16:33:56.197808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3784.532 ms
00:23:27.701  [2024-12-09 16:33:56.197822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.232821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.232869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:27.701  [2024-12-09 16:33:56.232883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.728 ms
00:23:27.701  [2024-12-09 16:33:56.232906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.233081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.233098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:27.701  [2024-12-09 16:33:56.233128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.056 ms
00:23:27.701  [2024-12-09 16:33:56.233145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.288564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.288607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:27.701  [2024-12-09 16:33:56.288621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 55.443 ms
00:23:27.701  [2024-12-09 16:33:56.288636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.288737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.288752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:27.701  [2024-12-09 16:33:56.288763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:27.701  [2024-12-09 16:33:56.288776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.289265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.289286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:27.701  [2024-12-09 16:33:56.289297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.449 ms
00:23:27.701  [2024-12-09 16:33:56.289310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.289440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.289454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:27.701  [2024-12-09 16:33:56.289483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.080 ms
00:23:27.701  [2024-12-09 16:33:56.289499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.310685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.310725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:27.701  [2024-12-09 16:33:56.310755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.169 ms
00:23:27.701  [2024-12-09 16:33:56.310768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.323235] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:27.701  [2024-12-09 16:33:56.339820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.339867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:27.701  [2024-12-09 16:33:56.339885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.920 ms
00:23:27.701  [2024-12-09 16:33:56.339903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.441633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.441690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:23:27.701  [2024-12-09 16:33:56.441725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 101.744 ms
00:23:27.701  [2024-12-09 16:33:56.441736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.442029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.442044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:27.701  [2024-12-09 16:33:56.442073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.152 ms
00:23:27.701  [2024-12-09 16:33:56.442083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.477964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.701  [2024-12-09 16:33:56.477999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:23:27.701  [2024-12-09 16:33:56.478016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.875 ms
00:23:27.701  [2024-12-09 16:33:56.478026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.701  [2024-12-09 16:33:56.512699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.512732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:23:27.702  [2024-12-09 16:33:56.512764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.623 ms
00:23:27.702  [2024-12-09 16:33:56.512774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.513554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.513576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:27.702  [2024-12-09 16:33:56.513590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.679 ms
00:23:27.702  [2024-12-09 16:33:56.513600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.621746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.621786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:23:27.702  [2024-12-09 16:33:56.621821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 108.233 ms
00:23:27.702  [2024-12-09 16:33:56.621832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.658578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.658613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:23:27.702  [2024-12-09 16:33:56.658645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.651 ms
00:23:27.702  [2024-12-09 16:33:56.658655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.693508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.693540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:23:27.702  [2024-12-09 16:33:56.693571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.807 ms
00:23:27.702  [2024-12-09 16:33:56.693580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.728255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.728303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:27.702  [2024-12-09 16:33:56.728335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.636 ms
00:23:27.702  [2024-12-09 16:33:56.728344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.728442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.728457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:27.702  [2024-12-09 16:33:56.728472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:27.702  [2024-12-09 16:33:56.728482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.728589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:27.702  [2024-12-09 16:33:56.728600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:27.702  [2024-12-09 16:33:56.728612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:23:27.702  [2024-12-09 16:33:56.728622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:27.702  [2024-12-09 16:33:56.729664] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:27.702  [2024-12-09 16:33:56.733736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4338.440 ms, result 0
00:23:27.702  [2024-12-09 16:33:56.734808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:27.702  {
00:23:27.702    "name": "ftl0",
00:23:27.702    "uuid": "e0d69a0d-a582-4620-86ef-b082c6824320"
00:23:27.702  }
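The create call that produced this startup log is worth reading flag by flag: -b names the new FTL bdev, -d points at the data (base) bdev, -c at the NV cache bdev, --core_mask 7 (binary 111) runs the FTL threads on cores 0-2, --l2p_dram_limit 60 caps the resident L2P in MiB (reflected in the "l2p maximum resident size is: 59 (of 60) MiB" notice), and --overprovisioning 10 sets the overprovisioning level to 10%. The -t 240 client timeout is deliberate: first-time startup scrubs the NV cache, which alone took ~3.8 s above. Reconstructed shape of the call, with $base_uuid standing in for the lvol UUID:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d "$base_uuid" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10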
00:23:27.702   16:33:56 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:23:27.702   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:23:27.702   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:27.702   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
00:23:27.702   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:27.702   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:27.702   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:23:27.962   16:33:56 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:23:28.223  [
00:23:28.223    {
00:23:28.223      "name": "ftl0",
00:23:28.223      "aliases": [
00:23:28.223        "e0d69a0d-a582-4620-86ef-b082c6824320"
00:23:28.223      ],
00:23:28.223      "product_name": "FTL disk",
00:23:28.223      "block_size": 4096,
00:23:28.223      "num_blocks": 23592960,
00:23:28.223      "uuid": "e0d69a0d-a582-4620-86ef-b082c6824320",
00:23:28.223      "assigned_rate_limits": {
00:23:28.223        "rw_ios_per_sec": 0,
00:23:28.223        "rw_mbytes_per_sec": 0,
00:23:28.223        "r_mbytes_per_sec": 0,
00:23:28.223        "w_mbytes_per_sec": 0
00:23:28.223      },
00:23:28.223      "claimed": false,
00:23:28.223      "zoned": false,
00:23:28.223      "supported_io_types": {
00:23:28.223        "read": true,
00:23:28.223        "write": true,
00:23:28.223        "unmap": true,
00:23:28.223        "flush": true,
00:23:28.223        "reset": false,
00:23:28.223        "nvme_admin": false,
00:23:28.223        "nvme_io": false,
00:23:28.223        "nvme_io_md": false,
00:23:28.223        "write_zeroes": true,
00:23:28.223        "zcopy": false,
00:23:28.223        "get_zone_info": false,
00:23:28.223        "zone_management": false,
00:23:28.223        "zone_append": false,
00:23:28.223        "compare": false,
00:23:28.223        "compare_and_write": false,
00:23:28.223        "abort": false,
00:23:28.223        "seek_hole": false,
00:23:28.223        "seek_data": false,
00:23:28.223        "copy": false,
00:23:28.223        "nvme_iov_md": false
00:23:28.223      },
00:23:28.223      "driver_specific": {
00:23:28.223        "ftl": {
00:23:28.223          "base_bdev": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:28.223          "cache": "nvc0n1p0"
00:23:28.223        }
00:23:28.223      }
00:23:28.223    }
00:23:28.223  ]
00:23:28.223   16:33:57 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0
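waitforbdev, traced above, is a two-step gate: bdev_wait_for_examine blocks until all registered examine callbacks have finished, then bdev_get_bdevs -b ftl0 -t 2000 asks the target itself to wait up to bdev_timeout ms (defaulted to 2000 by the [[ -z '' ]] branch) for the bdev to appear. A simplified sketch of the pattern, not the verbatim helper:

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}    # default seen in the trace
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }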
00:23:28.223   16:33:57 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:23:28.223   16:33:57 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:23:28.223   16:33:57 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}'
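The echo / save_subsystem_config / echo sandwich above splices the bdev subsystem dump into a complete JSON configuration; captured to a file, it can later be fed back to an SPDK application (file name illustrative):

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /tmp/ftl_trim.json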
00:23:28.223    16:33:57 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:23:28.483   16:33:57 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[
00:23:28.483    {
00:23:28.483      "name": "ftl0",
00:23:28.483      "aliases": [
00:23:28.483        "e0d69a0d-a582-4620-86ef-b082c6824320"
00:23:28.484      ],
00:23:28.484      "product_name": "FTL disk",
00:23:28.484      "block_size": 4096,
00:23:28.484      "num_blocks": 23592960,
00:23:28.484      "uuid": "e0d69a0d-a582-4620-86ef-b082c6824320",
00:23:28.484      "assigned_rate_limits": {
00:23:28.484        "rw_ios_per_sec": 0,
00:23:28.484        "rw_mbytes_per_sec": 0,
00:23:28.484        "r_mbytes_per_sec": 0,
00:23:28.484        "w_mbytes_per_sec": 0
00:23:28.484      },
00:23:28.484      "claimed": false,
00:23:28.484      "zoned": false,
00:23:28.484      "supported_io_types": {
00:23:28.484        "read": true,
00:23:28.484        "write": true,
00:23:28.484        "unmap": true,
00:23:28.484        "flush": true,
00:23:28.484        "reset": false,
00:23:28.484        "nvme_admin": false,
00:23:28.484        "nvme_io": false,
00:23:28.484        "nvme_io_md": false,
00:23:28.484        "write_zeroes": true,
00:23:28.484        "zcopy": false,
00:23:28.484        "get_zone_info": false,
00:23:28.484        "zone_management": false,
00:23:28.484        "zone_append": false,
00:23:28.484        "compare": false,
00:23:28.484        "compare_and_write": false,
00:23:28.484        "abort": false,
00:23:28.484        "seek_hole": false,
00:23:28.484        "seek_data": false,
00:23:28.484        "copy": false,
00:23:28.484        "nvme_iov_md": false
00:23:28.484      },
00:23:28.484      "driver_specific": {
00:23:28.484        "ftl": {
00:23:28.484          "base_bdev": "63949ebf-ab29-4492-8e4d-77fdf4fca543",
00:23:28.484          "cache": "nvc0n1p0"
00:23:28.484        }
00:23:28.484      }
00:23:28.484    }
00:23:28.484  ]'
00:23:28.484    16:33:57 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:23:28.484   16:33:57 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960
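nb=23592960 is the FTL bdev's exported block count and matches the "L2P entries: 23592960" reported at startup — one entry per 4 KiB block:

    echo $((23592960 * 4096 / 1024 / 1024 / 1024))   # -> 90 (GiB exposed by ftl0)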
00:23:28.484   16:33:57 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:23:28.744  [2024-12-09 16:33:57.792530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.744  [2024-12-09 16:33:57.792579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:28.744  [2024-12-09 16:33:57.792598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:28.744  [2024-12-09 16:33:57.792616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.744  [2024-12-09 16:33:57.792679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:28.744  [2024-12-09 16:33:57.796858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.744  [2024-12-09 16:33:57.796886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:28.744  [2024-12-09 16:33:57.796914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.164 ms
00:23:28.745  [2024-12-09 16:33:57.796925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.797898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.797926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:28.745  [2024-12-09 16:33:57.797941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.890 ms
00:23:28.745  [2024-12-09 16:33:57.797951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.800753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.800777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:28.745  [2024-12-09 16:33:57.800791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.756 ms
00:23:28.745  [2024-12-09 16:33:57.800801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.806623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.806664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:28.745  [2024-12-09 16:33:57.806695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.752 ms
00:23:28.745  [2024-12-09 16:33:57.806705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.842302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.842336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:28.745  [2024-12-09 16:33:57.842370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.520 ms
00:23:28.745  [2024-12-09 16:33:57.842380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.863780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.863814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:28.745  [2024-12-09 16:33:57.863846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.324 ms
00:23:28.745  [2024-12-09 16:33:57.863859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.864200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.864218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:28.745  [2024-12-09 16:33:57.864232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.228 ms
00:23:28.745  [2024-12-09 16:33:57.864243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.745  [2024-12-09 16:33:57.899689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.745  [2024-12-09 16:33:57.899722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:28.745  [2024-12-09 16:33:57.899752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.450 ms
00:23:28.745  [2024-12-09 16:33:57.899762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.006  [2024-12-09 16:33:57.934606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.006  [2024-12-09 16:33:57.934639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:29.006  [2024-12-09 16:33:57.934672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.791 ms
00:23:29.006  [2024-12-09 16:33:57.934681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.006  [2024-12-09 16:33:57.968994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.006  [2024-12-09 16:33:57.969034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:29.006  [2024-12-09 16:33:57.969066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.259 ms
00:23:29.006  [2024-12-09 16:33:57.969075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.006  [2024-12-09 16:33:58.003554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.006  [2024-12-09 16:33:58.003586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:29.006  [2024-12-09 16:33:58.003616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.327 ms
00:23:29.006  [2024-12-09 16:33:58.003625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.006  [2024-12-09 16:33:58.003731] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:29.006  [2024-12-09 16:33:58.003750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.006  [2024-12-09 16:33:58.003847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.003999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.007  [2024-12-09 16:33:58.004880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.004988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.005006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.005021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:29.008  [2024-12-09 16:33:58.005038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:29.008  [2024-12-09 16:33:58.005053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:23:29.008  [2024-12-09 16:33:58.005065] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:29.008  [2024-12-09 16:33:58.005198] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:29.008  [2024-12-09 16:33:58.005209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:29.008  [2024-12-09 16:33:58.005224] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:29.008  [2024-12-09 16:33:58.005234] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:29.008  [2024-12-09 16:33:58.005246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:29.008  [2024-12-09 16:33:58.005256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:29.008  [2024-12-09 16:33:58.005267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:29.008  [2024-12-09 16:33:58.005276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:29.008  [2024-12-09 16:33:58.005289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.008  [2024-12-09 16:33:58.005299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:29.008  [2024-12-09 16:33:58.005312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.562 ms
00:23:29.008  [2024-12-09 16:33:58.005322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.008  [2024-12-09 16:33:58.024785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.008  [2024-12-09 16:33:58.024816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:29.008  [2024-12-09 16:33:58.024848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.442 ms
00:23:29.008  [2024-12-09 16:33:58.024858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.008  [2024-12-09 16:33:58.025443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.008  [2024-12-09 16:33:58.025459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:29.008  [2024-12-09 16:33:58.025472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.482 ms
00:23:29.008  [2024-12-09 16:33:58.025482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.008  [2024-12-09 16:33:58.092308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.008  [2024-12-09 16:33:58.092339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:29.008  [2024-12-09 16:33:58.092354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.008  [2024-12-09 16:33:58.092364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.008  [2024-12-09 16:33:58.092496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.008  [2024-12-09 16:33:58.092509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:29.008  [2024-12-09 16:33:58.092522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.008  [2024-12-09 16:33:58.092531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.008  [2024-12-09 16:33:58.092621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.008  [2024-12-09 16:33:58.092634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:29.008  [2024-12-09 16:33:58.092653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.008  [2024-12-09 16:33:58.092663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.008  [2024-12-09 16:33:58.092720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.008  [2024-12-09 16:33:58.092731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:29.008  [2024-12-09 16:33:58.092743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.008  [2024-12-09 16:33:58.092752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.216682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.216734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:29.269  [2024-12-09 16:33:58.216750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.216760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.315442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.315486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:29.269  [2024-12-09 16:33:58.315518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.315529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.315667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.315679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:29.269  [2024-12-09 16:33:58.315696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.315709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.315811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.315821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:29.269  [2024-12-09 16:33:58.315834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.315843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.316022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.316036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:29.269  [2024-12-09 16:33:58.316050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.316062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.316156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.316169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:29.269  [2024-12-09 16:33:58.316182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.316192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.316270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.316281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:29.269  [2024-12-09 16:33:58.316296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.316306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.316395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:29.269  [2024-12-09 16:33:58.316407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:29.269  [2024-12-09 16:33:58.316420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:29.269  [2024-12-09 16:33:58.316429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.269  [2024-12-09 16:33:58.316702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.005 ms, result 0
00:23:29.269  true
00:23:29.269   16:33:58 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79242
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79242 ']'
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79242
00:23:29.269    16:33:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:29.269    16:33:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79242
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:29.269  killing process with pid 79242
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79242'
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79242
00:23:29.269   16:33:58 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79242
00:23:34.550   16:34:03 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:23:35.119  65536+0 records in
00:23:35.119  65536+0 records out
00:23:35.119  268435456 bytes (268 MB, 256 MiB) copied, 0.948747 s, 283 MB/s
00:23:35.119   16:34:04 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:35.119  [2024-12-09 16:34:04.225449] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:23:35.119  [2024-12-09 16:34:04.225579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79452 ]
00:23:35.379  [2024-12-09 16:34:04.409421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:35.379  [2024-12-09 16:34:04.512939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:35.949  [2024-12-09 16:34:04.858782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:35.949  [2024-12-09 16:34:04.858854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:35.949  [2024-12-09 16:34:05.020995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.021053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:35.949  [2024-12-09 16:34:05.021083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:35.949  [2024-12-09 16:34:05.021093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.024266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.024303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:35.949  [2024-12-09 16:34:05.024315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.158 ms
00:23:35.949  [2024-12-09 16:34:05.024324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.024431] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:35.949  [2024-12-09 16:34:05.025427] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:35.949  [2024-12-09 16:34:05.025464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.025475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:35.949  [2024-12-09 16:34:05.025485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.041 ms
00:23:35.949  [2024-12-09 16:34:05.025495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.026999] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:35.949  [2024-12-09 16:34:05.044992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.045036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:35.949  [2024-12-09 16:34:05.045066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.023 ms
00:23:35.949  [2024-12-09 16:34:05.045076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.045186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.045200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:35.949  [2024-12-09 16:34:05.045212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:23:35.949  [2024-12-09 16:34:05.045221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.052135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.052164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:35.949  [2024-12-09 16:34:05.052175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.881 ms
00:23:35.949  [2024-12-09 16:34:05.052185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.052311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.052325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:35.949  [2024-12-09 16:34:05.052337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:23:35.949  [2024-12-09 16:34:05.052347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.052378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.052389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:35.949  [2024-12-09 16:34:05.052400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:23:35.949  [2024-12-09 16:34:05.052410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.052432] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:35.949  [2024-12-09 16:34:05.057246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.057280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:35.949  [2024-12-09 16:34:05.057292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.826 ms
00:23:35.949  [2024-12-09 16:34:05.057303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.057371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.057384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:35.949  [2024-12-09 16:34:05.057395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:23:35.949  [2024-12-09 16:34:05.057406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.057433] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:35.949  [2024-12-09 16:34:05.057456] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:35.949  [2024-12-09 16:34:05.057490] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:35.949  [2024-12-09 16:34:05.057508] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:35.949  [2024-12-09 16:34:05.057598] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:35.949  [2024-12-09 16:34:05.057611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:35.949  [2024-12-09 16:34:05.057624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:35.949  [2024-12-09 16:34:05.057641] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:35.949  [2024-12-09 16:34:05.057653] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:35.949  [2024-12-09 16:34:05.057664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:35.949  [2024-12-09 16:34:05.057674] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:35.949  [2024-12-09 16:34:05.057684] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:35.949  [2024-12-09 16:34:05.057694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:35.949  [2024-12-09 16:34:05.057704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.057714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:35.949  [2024-12-09 16:34:05.057725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.274 ms
00:23:35.949  [2024-12-09 16:34:05.057734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.057810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.949  [2024-12-09 16:34:05.057831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:35.949  [2024-12-09 16:34:05.057842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:23:35.949  [2024-12-09 16:34:05.057851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.949  [2024-12-09 16:34:05.057960] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:35.949  [2024-12-09 16:34:05.057974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:35.949  [2024-12-09 16:34:05.057985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:35.949  [2024-12-09 16:34:05.057995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:35.949  [2024-12-09 16:34:05.058006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:35.949  [2024-12-09 16:34:05.058014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:35.949  [2024-12-09 16:34:05.058024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:35.949  [2024-12-09 16:34:05.058033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:35.949  [2024-12-09 16:34:05.058042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:35.949  [2024-12-09 16:34:05.058051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:35.949  [2024-12-09 16:34:05.058063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:35.949  [2024-12-09 16:34:05.058085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:35.949  [2024-12-09 16:34:05.058095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:35.949  [2024-12-09 16:34:05.058104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:35.949  [2024-12-09 16:34:05.058113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:35.949  [2024-12-09 16:34:05.058122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:35.949  [2024-12-09 16:34:05.058131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:35.949  [2024-12-09 16:34:05.058141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:35.949  [2024-12-09 16:34:05.058151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:35.949  [2024-12-09 16:34:05.058160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:35.950  [2024-12-09 16:34:05.058169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:35.950  [2024-12-09 16:34:05.058188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:35.950  [2024-12-09 16:34:05.058196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:35.950  [2024-12-09 16:34:05.058214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:35.950  [2024-12-09 16:34:05.058224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:35.950  [2024-12-09 16:34:05.058241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:35.950  [2024-12-09 16:34:05.058250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:35.950  [2024-12-09 16:34:05.058268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:35.950  [2024-12-09 16:34:05.058277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:35.950  [2024-12-09 16:34:05.058295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:35.950  [2024-12-09 16:34:05.058303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:35.950  [2024-12-09 16:34:05.058312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:35.950  [2024-12-09 16:34:05.058321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:35.950  [2024-12-09 16:34:05.058330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:35.950  [2024-12-09 16:34:05.058339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:35.950  [2024-12-09 16:34:05.058357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:35.950  [2024-12-09 16:34:05.058366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:35.950  [2024-12-09 16:34:05.058385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:35.950  [2024-12-09 16:34:05.058398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:35.950  [2024-12-09 16:34:05.058409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:35.950  [2024-12-09 16:34:05.058418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:35.950  [2024-12-09 16:34:05.058428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:35.950  [2024-12-09 16:34:05.058437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:35.950  [2024-12-09 16:34:05.058446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:35.950  [2024-12-09 16:34:05.058455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:35.950  [2024-12-09 16:34:05.058465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:35.950  [2024-12-09 16:34:05.058476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:35.950  [2024-12-09 16:34:05.058488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:35.950  [2024-12-09 16:34:05.058510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:35.950  [2024-12-09 16:34:05.058520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:35.950  [2024-12-09 16:34:05.058531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:35.950  [2024-12-09 16:34:05.058541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:35.950  [2024-12-09 16:34:05.058552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:35.950  [2024-12-09 16:34:05.058562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:35.950  [2024-12-09 16:34:05.058573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:35.950  [2024-12-09 16:34:05.058583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:35.950  [2024-12-09 16:34:05.058593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:35.950  [2024-12-09 16:34:05.058644] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:35.950  [2024-12-09 16:34:05.058655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:35.950  [2024-12-09 16:34:05.058676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:35.950  [2024-12-09 16:34:05.058686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:35.950  [2024-12-09 16:34:05.058697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:35.950  [2024-12-09 16:34:05.058707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.950  [2024-12-09 16:34:05.058721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:35.950  [2024-12-09 16:34:05.058731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.821 ms
00:23:35.950  [2024-12-09 16:34:05.058741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.950  [2024-12-09 16:34:05.097188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.950  [2024-12-09 16:34:05.097227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:35.950  [2024-12-09 16:34:05.097240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.450 ms
00:23:35.950  [2024-12-09 16:34:05.097251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:35.950  [2024-12-09 16:34:05.097370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:35.950  [2024-12-09 16:34:05.097383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:35.950  [2024-12-09 16:34:05.097393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:23:35.950  [2024-12-09 16:34:05.097403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.163472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.163514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:36.209  [2024-12-09 16:34:05.163532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 66.154 ms
00:23:36.209  [2024-12-09 16:34:05.163543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.163658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.163672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:36.209  [2024-12-09 16:34:05.163683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:36.209  [2024-12-09 16:34:05.163693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.164157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.164179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:36.209  [2024-12-09 16:34:05.164196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.443 ms
00:23:36.209  [2024-12-09 16:34:05.164207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.164325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.164339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:36.209  [2024-12-09 16:34:05.164350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.093 ms
00:23:36.209  [2024-12-09 16:34:05.164360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.183357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.183394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:36.209  [2024-12-09 16:34:05.183407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.005 ms
00:23:36.209  [2024-12-09 16:34:05.183417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.201795] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:23:36.209  [2024-12-09 16:34:05.201854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:36.209  [2024-12-09 16:34:05.201885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.201896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:36.209  [2024-12-09 16:34:05.201907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.378 ms
00:23:36.209  [2024-12-09 16:34:05.201937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.229673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.229713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:36.209  [2024-12-09 16:34:05.229726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.698 ms
00:23:36.209  [2024-12-09 16:34:05.229736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.246857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.246891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:36.209  [2024-12-09 16:34:05.246934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.069 ms
00:23:36.209  [2024-12-09 16:34:05.246944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.263749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.263783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:36.209  [2024-12-09 16:34:05.263795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.757 ms
00:23:36.209  [2024-12-09 16:34:05.263803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.264614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.264647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:36.209  [2024-12-09 16:34:05.264660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.668 ms
00:23:36.209  [2024-12-09 16:34:05.264670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.346058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.346130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:36.209  [2024-12-09 16:34:05.346146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 81.491 ms
00:23:36.209  [2024-12-09 16:34:05.346157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.356079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:36.209  [2024-12-09 16:34:05.371465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.371510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:36.209  [2024-12-09 16:34:05.371524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.240 ms
00:23:36.209  [2024-12-09 16:34:05.371534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.371663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.371676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:36.209  [2024-12-09 16:34:05.371688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:36.209  [2024-12-09 16:34:05.371697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.371751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.371763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:36.209  [2024-12-09 16:34:05.371773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:23:36.209  [2024-12-09 16:34:05.371783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.209  [2024-12-09 16:34:05.371817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.209  [2024-12-09 16:34:05.371833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:36.209  [2024-12-09 16:34:05.371844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:23:36.210  [2024-12-09 16:34:05.371854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.210  [2024-12-09 16:34:05.371891] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:36.210  [2024-12-09 16:34:05.371945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.210  [2024-12-09 16:34:05.371956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:36.210  [2024-12-09 16:34:05.371966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:23:36.210  [2024-12-09 16:34:05.371976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.468  [2024-12-09 16:34:05.405922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.468  [2024-12-09 16:34:05.405976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:36.468  [2024-12-09 16:34:05.405989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.975 ms
00:23:36.468  [2024-12-09 16:34:05.405999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.468  [2024-12-09 16:34:05.406122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.468  [2024-12-09 16:34:05.406135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:36.468  [2024-12-09 16:34:05.406146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:23:36.468  [2024-12-09 16:34:05.406157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:36.468  [2024-12-09 16:34:05.407122] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:36.468  [2024-12-09 16:34:05.411334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.445 ms, result 0
00:23:36.469  [2024-12-09 16:34:05.412193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:36.469  [2024-12-09 16:34:05.429948] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:37.407  
[2024-12-09T16:34:07.525Z] Copying: 21/256 [MB] (21 MBps)
[2024-12-09T16:34:08.463Z] Copying: 43/256 [MB] (21 MBps)
[2024-12-09T16:34:09.845Z] Copying: 65/256 [MB] (21 MBps)
[2024-12-09T16:34:10.783Z] Copying: 87/256 [MB] (21 MBps)
[2024-12-09T16:34:11.722Z] Copying: 108/256 [MB] (21 MBps)
[2024-12-09T16:34:12.660Z] Copying: 130/256 [MB] (21 MBps)
[2024-12-09T16:34:13.599Z] Copying: 152/256 [MB] (21 MBps)
[2024-12-09T16:34:14.537Z] Copying: 174/256 [MB] (21 MBps)
[2024-12-09T16:34:15.475Z] Copying: 196/256 [MB] (21 MBps)
[2024-12-09T16:34:16.858Z] Copying: 217/256 [MB] (21 MBps)
[2024-12-09T16:34:17.428Z] Copying: 238/256 [MB] (21 MBps)
[2024-12-09T16:34:17.428Z] Copying: 256/256 [MB] (average 21 MBps)
[2024-12-09 16:34:17.203309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:48.249  [2024-12-09 16:34:17.217963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.249  [2024-12-09 16:34:17.218003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:48.249  [2024-12-09 16:34:17.218018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:48.249  [2024-12-09 16:34:17.218034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.249  [2024-12-09 16:34:17.218058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:48.249  [2024-12-09 16:34:17.222174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.249  [2024-12-09 16:34:17.222203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:48.249  [2024-12-09 16:34:17.222214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.107 ms
00:23:48.249  [2024-12-09 16:34:17.222223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.249  [2024-12-09 16:34:17.224064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.249  [2024-12-09 16:34:17.224100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:48.249  [2024-12-09 16:34:17.224112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.804 ms
00:23:48.249  [2024-12-09 16:34:17.224123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.249  [2024-12-09 16:34:17.230724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.249  [2024-12-09 16:34:17.230768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:48.249  [2024-12-09 16:34:17.230780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.593 ms
00:23:48.249  [2024-12-09 16:34:17.230790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.249  [2024-12-09 16:34:17.236277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.249  [2024-12-09 16:34:17.236311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:48.249  [2024-12-09 16:34:17.236322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.378 ms
00:23:48.249  [2024-12-09 16:34:17.236332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.249  [2024-12-09 16:34:17.271646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.249  [2024-12-09 16:34:17.271685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:48.249  [2024-12-09 16:34:17.271699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.311 ms
00:23:48.250  [2024-12-09 16:34:17.271709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.250  [2024-12-09 16:34:17.291982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.250  [2024-12-09 16:34:17.292025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:48.250  [2024-12-09 16:34:17.292041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.221 ms
00:23:48.250  [2024-12-09 16:34:17.292067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.250  [2024-12-09 16:34:17.292198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.250  [2024-12-09 16:34:17.292214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:48.250  [2024-12-09 16:34:17.292225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.090 ms
00:23:48.250  [2024-12-09 16:34:17.292248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.250  [2024-12-09 16:34:17.328116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.250  [2024-12-09 16:34:17.328151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:48.250  [2024-12-09 16:34:17.328179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.908 ms
00:23:48.250  [2024-12-09 16:34:17.328189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.250  [2024-12-09 16:34:17.362622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.250  [2024-12-09 16:34:17.362660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:48.250  [2024-12-09 16:34:17.362672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.435 ms
00:23:48.250  [2024-12-09 16:34:17.362681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.250  [2024-12-09 16:34:17.396584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.250  [2024-12-09 16:34:17.396621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:48.250  [2024-12-09 16:34:17.396633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.871 ms
00:23:48.250  [2024-12-09 16:34:17.396642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.510  [2024-12-09 16:34:17.430243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.510  [2024-12-09 16:34:17.430282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:48.510  [2024-12-09 16:34:17.430293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.564 ms
00:23:48.511  [2024-12-09 16:34:17.430302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.511  [2024-12-09 16:34:17.430374] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:48.511  [2024-12-09 16:34:17.430390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.430992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.511  [2024-12-09 16:34:17.431233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:48.512  [2024-12-09 16:34:17.431456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
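Reading the band dump above: each line from ftl_dev_dump_bands appears to report the band's valid block count against its capacity in blocks ("0 / 261120"), then its write count and state. Here all 100 bands are free with no valid blocks, which is consistent with the zeroed stats dump that follows.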
00:23:48.512  [2024-12-09 16:34:17.431473] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:48.512  [2024-12-09 16:34:17.431488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:23:48.512  [2024-12-09 16:34:17.431499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:48.512  [2024-12-09 16:34:17.431510] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:48.512  [2024-12-09 16:34:17.431519] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:48.512  [2024-12-09 16:34:17.431529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:48.512  [2024-12-09 16:34:17.431538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:48.512  [2024-12-09 16:34:17.431548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:48.512  [2024-12-09 16:34:17.431558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:48.512  [2024-12-09 16:34:17.431567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:48.512  [2024-12-09 16:34:17.431576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
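On the "WAF: inf" line above: the figure is consistent with write amplification being computed as total device writes over user writes, here 960 / 0, which is undefined and printed as "inf". In other words, this run produced only internal and metadata writes, no user I/O.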
00:23:48.512  [2024-12-09 16:34:17.431586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.512  [2024-12-09 16:34:17.431602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:48.512  [2024-12-09 16:34:17.431613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.215 ms
00:23:48.512  [2024-12-09 16:34:17.431623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.451263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.512  [2024-12-09 16:34:17.451299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:48.512  [2024-12-09 16:34:17.451311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.652 ms
00:23:48.512  [2024-12-09 16:34:17.451321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.451931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.512  [2024-12-09 16:34:17.451954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:48.512  [2024-12-09 16:34:17.451966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.572 ms
00:23:48.512  [2024-12-09 16:34:17.451976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.506508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.512  [2024-12-09 16:34:17.506544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:48.512  [2024-12-09 16:34:17.506556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.512  [2024-12-09 16:34:17.506566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.506675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.512  [2024-12-09 16:34:17.506687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:48.512  [2024-12-09 16:34:17.506698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.512  [2024-12-09 16:34:17.506707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.506755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.512  [2024-12-09 16:34:17.506767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:48.512  [2024-12-09 16:34:17.506777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.512  [2024-12-09 16:34:17.506787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.506805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.512  [2024-12-09 16:34:17.506820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:48.512  [2024-12-09 16:34:17.506830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.512  [2024-12-09 16:34:17.506840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.512  [2024-12-09 16:34:17.621713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.512  [2024-12-09 16:34:17.621763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:48.512  [2024-12-09 16:34:17.621777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.512  [2024-12-09 16:34:17.621786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.716567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.716618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:48.772  [2024-12-09 16:34:17.716632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.772  [2024-12-09 16:34:17.716642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.716716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.716727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:48.772  [2024-12-09 16:34:17.716738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.772  [2024-12-09 16:34:17.716748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.716775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.716786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:48.772  [2024-12-09 16:34:17.716803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.772  [2024-12-09 16:34:17.716813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.716930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.716944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:48.772  [2024-12-09 16:34:17.716954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.772  [2024-12-09 16:34:17.716964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.717032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.717044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:48.772  [2024-12-09 16:34:17.717055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.772  [2024-12-09 16:34:17.717070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.717110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.717121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:48.772  [2024-12-09 16:34:17.717131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.772  [2024-12-09 16:34:17.717141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.772  [2024-12-09 16:34:17.717184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.772  [2024-12-09 16:34:17.717195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:48.772  [2024-12-09 16:34:17.717209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:48.773  [2024-12-09 16:34:17.717220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:48.773  [2024-12-09 16:34:17.717356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.195 ms, result 0
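Each management process above emits per-step Action/Rollback records with a "duration: <N> ms" line, plus an overall total in finish_msg ("duration = 500.195 ms" here). A rough way to cross-check them from a saved copy of this console log (hypothetical helper, not an SPDK script; "build.log" is a stand-in filename, and the pattern assumes the exact layout shown above):

    grep 'trace_step' build.log \
      | sed -n 's/.*duration: \([0-9.]*\) ms.*/\1/p' \
      | awk '{s += $1} END {printf "sum of step durations: %.3f ms\n", s}'

Note the step durations account for only a fraction of the 500.195 ms total, so time spent between traced steps is not attributed to any step.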
00:23:50.154  
00:23:50.154  
00:23:50.154   16:34:18 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79606
00:23:50.154   16:34:18 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:23:50.154   16:34:18 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79606
00:23:50.154   16:34:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79606 ']'
00:23:50.154   16:34:18 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:50.154   16:34:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:50.154  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:50.154   16:34:18 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:50.154   16:34:18 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:50.154   16:34:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:23:50.154  [2024-12-09 16:34:19.026329] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:23:50.154  [2024-12-09 16:34:19.026450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79606 ]
00:23:50.154  [2024-12-09 16:34:19.203999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:50.154  [2024-12-09 16:34:19.308978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:51.092   16:34:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:51.092   16:34:20 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
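The waitforlisten idiom above (from autotest_common.sh) boils down to: launch spdk_tgt in the background, then poll the RPC socket until it answers. A minimal sketch of that shape (hypothetical simplified form, using the real rpc_get_methods RPC as the probe; the actual helper retries with a cap and richer error handling):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll until the target answers on the default socket /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods \
          >/dev/null 2>&1; do
        sleep 0.1
    done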
00:23:51.092   16:34:20 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:23:51.352  [2024-12-09 16:34:20.360480] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:51.352  [2024-12-09 16:34:20.360546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:51.613  [2024-12-09 16:34:20.544300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.613  [2024-12-09 16:34:20.544350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:51.613  [2024-12-09 16:34:20.544370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:51.613  [2024-12-09 16:34:20.544381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.613  [2024-12-09 16:34:20.548247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.613  [2024-12-09 16:34:20.548286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:51.613  [2024-12-09 16:34:20.548317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.851 ms
00:23:51.613  [2024-12-09 16:34:20.548327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.613  [2024-12-09 16:34:20.548432] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:51.613  [2024-12-09 16:34:20.549386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:51.613  [2024-12-09 16:34:20.549425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.613  [2024-12-09 16:34:20.549437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:51.613  [2024-12-09 16:34:20.549450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.006 ms
00:23:51.613  [2024-12-09 16:34:20.549460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.551147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:51.614  [2024-12-09 16:34:20.570832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.570877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:51.614  [2024-12-09 16:34:20.570892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.720 ms
00:23:51.614  [2024-12-09 16:34:20.570914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.571012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.571029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:51.614  [2024-12-09 16:34:20.571041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:23:51.614  [2024-12-09 16:34:20.571054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.577865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.577921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:51.614  [2024-12-09 16:34:20.577934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.774 ms
00:23:51.614  [2024-12-09 16:34:20.577947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.578065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.578083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:51.614  [2024-12-09 16:34:20.578110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.083 ms
00:23:51.614  [2024-12-09 16:34:20.578127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.578154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.578168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:51.614  [2024-12-09 16:34:20.578180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:23:51.614  [2024-12-09 16:34:20.578192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.578216] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:51.614  [2024-12-09 16:34:20.583009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.583039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:51.614  [2024-12-09 16:34:20.583054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.802 ms
00:23:51.614  [2024-12-09 16:34:20.583063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.583133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.583145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:51.614  [2024-12-09 16:34:20.583157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:23:51.614  [2024-12-09 16:34:20.583169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.583192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:51.614  [2024-12-09 16:34:20.583216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:51.614  [2024-12-09 16:34:20.583261] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:51.614  [2024-12-09 16:34:20.583279] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:51.614  [2024-12-09 16:34:20.583403] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:51.614  [2024-12-09 16:34:20.583418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:51.614  [2024-12-09 16:34:20.583446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:51.614  [2024-12-09 16:34:20.583459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:51.614  [2024-12-09 16:34:20.583474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:51.614  [2024-12-09 16:34:20.583487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:51.614  [2024-12-09 16:34:20.583500] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:51.614  [2024-12-09 16:34:20.583510] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:51.614  [2024-12-09 16:34:20.583525] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
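A quick cross-check of the layout summary above: 23592960 L2P entries at the reported 4 bytes per entry is exactly the 90.00 MiB l2p region listed in the NV cache layout dump below (reading-aid one-liner, not part of the test):

    echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"   # -> 90 MiB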
00:23:51.614  [2024-12-09 16:34:20.583537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.583551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:51.614  [2024-12-09 16:34:20.583561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.350 ms
00:23:51.614  [2024-12-09 16:34:20.583573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.583651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.614  [2024-12-09 16:34:20.583678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:51.614  [2024-12-09 16:34:20.583688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:23:51.614  [2024-12-09 16:34:20.583701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.614  [2024-12-09 16:34:20.583788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:51.614  [2024-12-09 16:34:20.583804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:51.614  [2024-12-09 16:34:20.583815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:51.614  [2024-12-09 16:34:20.583827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:51.614  [2024-12-09 16:34:20.583837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:51.614  [2024-12-09 16:34:20.583851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:51.614  [2024-12-09 16:34:20.583862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:51.614  [2024-12-09 16:34:20.583877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:51.614  [2024-12-09 16:34:20.583887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:51.614  [2024-12-09 16:34:20.583899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:51.614  [2024-12-09 16:34:20.583908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:51.614  [2024-12-09 16:34:20.583920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:51.614  [2024-12-09 16:34:20.583944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:51.614  [2024-12-09 16:34:20.583956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:51.614  [2024-12-09 16:34:20.583966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:51.614  [2024-12-09 16:34:20.583978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:51.614  [2024-12-09 16:34:20.583988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:51.614  [2024-12-09 16:34:20.584000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:51.614  [2024-12-09 16:34:20.584018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:51.614  [2024-12-09 16:34:20.584040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:51.614  [2024-12-09 16:34:20.584062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:51.614  [2024-12-09 16:34:20.584077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:51.614  [2024-12-09 16:34:20.584097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:51.614  [2024-12-09 16:34:20.584106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:51.614  [2024-12-09 16:34:20.584127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:51.614  [2024-12-09 16:34:20.584140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:51.614  [2024-12-09 16:34:20.584161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:51.614  [2024-12-09 16:34:20.584170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:51.614  [2024-12-09 16:34:20.584191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:51.614  [2024-12-09 16:34:20.584203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:51.614  [2024-12-09 16:34:20.584211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:51.614  [2024-12-09 16:34:20.584223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:51.614  [2024-12-09 16:34:20.584232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:51.614  [2024-12-09 16:34:20.584246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:51.614  [2024-12-09 16:34:20.584268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:51.614  [2024-12-09 16:34:20.584278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:51.614  [2024-12-09 16:34:20.584291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:51.614  [2024-12-09 16:34:20.584304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:51.615  [2024-12-09 16:34:20.584316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:51.615  [2024-12-09 16:34:20.584326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:51.615  [2024-12-09 16:34:20.584338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:51.615  [2024-12-09 16:34:20.584348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:51.615  [2024-12-09 16:34:20.584360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:51.615  [2024-12-09 16:34:20.584370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:51.615  [2024-12-09 16:34:20.584382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:51.615  [2024-12-09 16:34:20.584391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:51.615  [2024-12-09 16:34:20.584404] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:51.615  [2024-12-09 16:34:20.584416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:51.615  [2024-12-09 16:34:20.584446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:51.615  [2024-12-09 16:34:20.584459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:51.615  [2024-12-09 16:34:20.584469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:51.615  [2024-12-09 16:34:20.584482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:51.615  [2024-12-09 16:34:20.584493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:51.615  [2024-12-09 16:34:20.584506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:51.615  [2024-12-09 16:34:20.584517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:51.615  [2024-12-09 16:34:20.584536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:51.615  [2024-12-09 16:34:20.584546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:51.615  [2024-12-09 16:34:20.584605] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:51.615  [2024-12-09 16:34:20.584616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:51.615  [2024-12-09 16:34:20.584643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:51.615  [2024-12-09 16:34:20.584655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:51.615  [2024-12-09 16:34:20.584667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
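The blk_offs/blk_sz values in the SB metadata dump above are in FTL blocks. Assuming the usual 4 KiB FTL block size (an assumption, not stated in this log), region type 0x2 with blk_sz 0x5a00 works out to the same 90 MiB as the l2p region, suggesting type 0x2 is the l2p (reading-aid one-liner):

    printf '%d blocks -> %d MiB\n' $((0x5a00)) $((0x5a00 * 4096 / 1048576))
    # 23040 blocks -> 90 MiB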
00:23:51.615  [2024-12-09 16:34:20.584684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.584695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:51.615  [2024-12-09 16:34:20.584707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.949 ms
00:23:51.615  [2024-12-09 16:34:20.584720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.624013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.624051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:51.615  [2024-12-09 16:34:20.624082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.294 ms
00:23:51.615  [2024-12-09 16:34:20.624095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.624208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.624221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:51.615  [2024-12-09 16:34:20.624234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:23:51.615  [2024-12-09 16:34:20.624244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.670942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.670981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:51.615  [2024-12-09 16:34:20.670999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 46.742 ms
00:23:51.615  [2024-12-09 16:34:20.671011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.671098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.671111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:51.615  [2024-12-09 16:34:20.671127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:51.615  [2024-12-09 16:34:20.671138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.671586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.671606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:51.615  [2024-12-09 16:34:20.671622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.421 ms
00:23:51.615  [2024-12-09 16:34:20.671632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.671757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.671771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:51.615  [2024-12-09 16:34:20.671787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.099 ms
00:23:51.615  [2024-12-09 16:34:20.671798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.694221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.694252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:51.615  [2024-12-09 16:34:20.694271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.427 ms
00:23:51.615  [2024-12-09 16:34:20.694282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.725589] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:23:51.615  [2024-12-09 16:34:20.725624] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:51.615  [2024-12-09 16:34:20.725645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.725656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:51.615  [2024-12-09 16:34:20.725672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.281 ms
00:23:51.615  [2024-12-09 16:34:20.725693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.754299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.754334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:51.615  [2024-12-09 16:34:20.754353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.531 ms
00:23:51.615  [2024-12-09 16:34:20.754363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.615  [2024-12-09 16:34:20.772602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.615  [2024-12-09 16:34:20.772637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:51.615  [2024-12-09 16:34:20.772661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.128 ms
00:23:51.615  [2024-12-09 16:34:20.772671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.790256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.790288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:51.876  [2024-12-09 16:34:20.790306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.478 ms
00:23:51.876  [2024-12-09 16:34:20.790316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.791245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.791277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:51.876  [2024-12-09 16:34:20.791294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.762 ms
00:23:51.876  [2024-12-09 16:34:20.791305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.874029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.874089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:51.876  [2024-12-09 16:34:20.874125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 82.826 ms
00:23:51.876  [2024-12-09 16:34:20.874136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.883970] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:51.876  [2024-12-09 16:34:20.899263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.899329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:51.876  [2024-12-09 16:34:20.899347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.054 ms
00:23:51.876  [2024-12-09 16:34:20.899360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.899446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.899462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:51.876  [2024-12-09 16:34:20.899474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:51.876  [2024-12-09 16:34:20.899486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.899538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.899552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:51.876  [2024-12-09 16:34:20.899563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:23:51.876  [2024-12-09 16:34:20.899578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.899602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.899616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:51.876  [2024-12-09 16:34:20.899627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:51.876  [2024-12-09 16:34:20.899639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.899675] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:51.876  [2024-12-09 16:34:20.899693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.899706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:51.876  [2024-12-09 16:34:20.899734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:23:51.876  [2024-12-09 16:34:20.899747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.934011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.934049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:51.876  [2024-12-09 16:34:20.934067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.282 ms
00:23:51.876  [2024-12-09 16:34:20.934077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.934186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.876  [2024-12-09 16:34:20.934199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:51.876  [2024-12-09 16:34:20.934215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:23:51.876  [2024-12-09 16:34:20.934230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.876  [2024-12-09 16:34:20.935239] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:51.876  [2024-12-09 16:34:20.939381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.201 ms, result 0
00:23:51.876  [2024-12-09 16:34:20.940523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:51.876  Some configs were skipped because the RPC state that can call them passed over.
00:23:51.876   16:34:20 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:52.136  [2024-12-09 16:34:21.159572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.136  [2024-12-09 16:34:21.159635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:23:52.136  [2024-12-09 16:34:21.159651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.687 ms
00:23:52.136  [2024-12-09 16:34:21.159667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.136  [2024-12-09 16:34:21.159712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.825 ms, result 0
00:23:52.136  true
00:23:52.136   16:34:21 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:52.396  [2024-12-09 16:34:21.359079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.396  [2024-12-09 16:34:21.359129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:23:52.396  [2024-12-09 16:34:21.359150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.288 ms
00:23:52.396  [2024-12-09 16:34:21.359161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.396  [2024-12-09 16:34:21.359210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.417 ms, result 0
00:23:52.396  true
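The two bdev_ftl_unmap calls above trim the first and the last 1024 blocks of the device: 23591936 + 1024 = 23592960, the L2P entry count reported at startup. For reference, a trim elsewhere in the range uses the same flags (hypothetical invocation, LBA chosen arbitrarily at the midpoint):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 \
        --lba 11796480 --num_blocks 1024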
00:23:52.396   16:34:21 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79606
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79606 ']'
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79606
00:23:52.396    16:34:21 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:52.396    16:34:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79606
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79606'
00:23:52.396  killing process with pid 79606
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79606
00:23:52.396   16:34:21 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79606
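killprocess above guards the kill: confirm the pid is alive (kill -0), check its comm is not "sudo" before signalling, then wait to reap it; wait succeeds here only because spdk_tgt is a child of the test shell. A stripped-down sketch of the same shape (hypothetical standalone form):

    pid=79606                                  # pid from this run
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] || kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi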
00:23:53.338  [2024-12-09 16:34:22.495401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.338  [2024-12-09 16:34:22.495460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:53.338  [2024-12-09 16:34:22.495475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:53.338  [2024-12-09 16:34:22.495487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.338  [2024-12-09 16:34:22.495511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:53.338  [2024-12-09 16:34:22.499654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.338  [2024-12-09 16:34:22.499684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:53.338  [2024-12-09 16:34:22.499701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.128 ms
00:23:53.338  [2024-12-09 16:34:22.499710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.338  [2024-12-09 16:34:22.499973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.338  [2024-12-09 16:34:22.499988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:53.338  [2024-12-09 16:34:22.500017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.202 ms
00:23:53.338  [2024-12-09 16:34:22.500027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.338  [2024-12-09 16:34:22.503424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.338  [2024-12-09 16:34:22.503460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:53.338  [2024-12-09 16:34:22.503478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.380 ms
00:23:53.338  [2024-12-09 16:34:22.503489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.338  [2024-12-09 16:34:22.508908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.338  [2024-12-09 16:34:22.508943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:53.338  [2024-12-09 16:34:22.508959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.389 ms
00:23:53.338  [2024-12-09 16:34:22.508969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.522945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.522987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:53.600  [2024-12-09 16:34:22.523005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.933 ms
00:23:53.600  [2024-12-09 16:34:22.523014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.532811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.532851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:53.600  [2024-12-09 16:34:22.532866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.742 ms
00:23:53.600  [2024-12-09 16:34:22.532875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.533013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.533026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:53.600  [2024-12-09 16:34:22.533055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.082 ms
00:23:53.600  [2024-12-09 16:34:22.533065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.548206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.548241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:53.600  [2024-12-09 16:34:22.548259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.120 ms
00:23:53.600  [2024-12-09 16:34:22.548268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.562913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.562948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:53.600  [2024-12-09 16:34:22.562989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.612 ms
00:23:53.600  [2024-12-09 16:34:22.562998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.576946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.576981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:53.600  [2024-12-09 16:34:22.577006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.912 ms
00:23:53.600  [2024-12-09 16:34:22.577016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.591075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.600  [2024-12-09 16:34:22.591107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:53.600  [2024-12-09 16:34:22.591124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.984 ms
00:23:53.600  [2024-12-09 16:34:22.591134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.600  [2024-12-09 16:34:22.591198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:53.600  [2024-12-09 16:34:22.591214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.600  [2024-12-09 16:34:22.591734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.591999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:53.601  [2024-12-09 16:34:22.592638] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:53.601  [2024-12-09 16:34:22.592664] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:23:53.601  [2024-12-09 16:34:22.592682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:53.601  [2024-12-09 16:34:22.592697] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:53.601  [2024-12-09 16:34:22.592707] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:53.601  [2024-12-09 16:34:22.592722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:53.601  [2024-12-09 16:34:22.592732] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:53.601  [2024-12-09 16:34:22.592747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:53.601  [2024-12-09 16:34:22.592757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:53.601  [2024-12-09 16:34:22.592771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:53.601  [2024-12-09 16:34:22.592781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:53.601  [2024-12-09 16:34:22.592796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.601  [2024-12-09 16:34:22.592807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:53.601  [2024-12-09 16:34:22.592825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.602 ms
00:23:53.601  [2024-12-09 16:34:22.592835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.601  [2024-12-09 16:34:22.612550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.602  [2024-12-09 16:34:22.612584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:53.602  [2024-12-09 16:34:22.612623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.709 ms
00:23:53.602  [2024-12-09 16:34:22.612634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.602  [2024-12-09 16:34:22.613268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.602  [2024-12-09 16:34:22.613295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:53.602  [2024-12-09 16:34:22.613318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.580 ms
00:23:53.602  [2024-12-09 16:34:22.613329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.602  [2024-12-09 16:34:22.681093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.602  [2024-12-09 16:34:22.681129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:53.602  [2024-12-09 16:34:22.681162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.602  [2024-12-09 16:34:22.681173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.602  [2024-12-09 16:34:22.681257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.602  [2024-12-09 16:34:22.681270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:53.602  [2024-12-09 16:34:22.681291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.602  [2024-12-09 16:34:22.681303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.602  [2024-12-09 16:34:22.681355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.602  [2024-12-09 16:34:22.681367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:53.602  [2024-12-09 16:34:22.681386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.602  [2024-12-09 16:34:22.681396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.602  [2024-12-09 16:34:22.681420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.602  [2024-12-09 16:34:22.681432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:53.602  [2024-12-09 16:34:22.681446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.602  [2024-12-09 16:34:22.681461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.798326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.798378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:53.862  [2024-12-09 16:34:22.798398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.798408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.894318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.894363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:53.862  [2024-12-09 16:34:22.894382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.894398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.894474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.894487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:53.862  [2024-12-09 16:34:22.894506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.894517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.894549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.894560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:53.862  [2024-12-09 16:34:22.894575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.894585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.894718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.894732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:53.862  [2024-12-09 16:34:22.894748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.894775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.894819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.894833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:53.862  [2024-12-09 16:34:22.894848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.894859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.894925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.894939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:53.862  [2024-12-09 16:34:22.894960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.894971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.895020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.862  [2024-12-09 16:34:22.895034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:53.862  [2024-12-09 16:34:22.895049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:53.862  [2024-12-09 16:34:22.895060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:53.862  [2024-12-09 16:34:22.895208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.424 ms, result 0
00:23:54.922   16:34:23 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:23:54.922   16:34:23 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:54.922  [2024-12-09 16:34:23.957407] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:23:54.922  [2024-12-09 16:34:23.957520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79670 ]
00:23:55.182  [2024-12-09 16:34:24.133440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:55.182  [2024-12-09 16:34:24.247781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:55.442  [2024-12-09 16:34:24.587512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:55.442  [2024-12-09 16:34:24.587582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:55.704  [2024-12-09 16:34:24.748770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.748816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:55.704  [2024-12-09 16:34:24.748831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:55.704  [2024-12-09 16:34:24.748842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.752030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.752067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:55.704  [2024-12-09 16:34:24.752094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.174 ms
00:23:55.704  [2024-12-09 16:34:24.752105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.752197] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:55.704  [2024-12-09 16:34:24.753189] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:55.704  [2024-12-09 16:34:24.753226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.753239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:55.704  [2024-12-09 16:34:24.753250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.038 ms
00:23:55.704  [2024-12-09 16:34:24.753260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.754740] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:55.704  [2024-12-09 16:34:24.773139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.773176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:55.704  [2024-12-09 16:34:24.773189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.429 ms
00:23:55.704  [2024-12-09 16:34:24.773199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.773296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.773310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:55.704  [2024-12-09 16:34:24.773321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:23:55.704  [2024-12-09 16:34:24.773330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.780120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.780146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:55.704  [2024-12-09 16:34:24.780157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.762 ms
00:23:55.704  [2024-12-09 16:34:24.780166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.780257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.780271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:55.704  [2024-12-09 16:34:24.780282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.056 ms
00:23:55.704  [2024-12-09 16:34:24.780291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.780321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.780332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:55.704  [2024-12-09 16:34:24.780342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:55.704  [2024-12-09 16:34:24.780351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.780370] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:55.704  [2024-12-09 16:34:24.785018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.785048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:55.704  [2024-12-09 16:34:24.785075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.659 ms
00:23:55.704  [2024-12-09 16:34:24.785085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.785153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.785167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:55.704  [2024-12-09 16:34:24.785179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:23:55.704  [2024-12-09 16:34:24.785189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.785216] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:55.704  [2024-12-09 16:34:24.785240] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:55.704  [2024-12-09 16:34:24.785273] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:55.704  [2024-12-09 16:34:24.785291] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:55.704  [2024-12-09 16:34:24.785377] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:55.704  [2024-12-09 16:34:24.785391] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:55.704  [2024-12-09 16:34:24.785404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:55.704  [2024-12-09 16:34:24.785436] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:55.704  [2024-12-09 16:34:24.785449] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:55.704  [2024-12-09 16:34:24.785460] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:55.704  [2024-12-09 16:34:24.785472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:55.704  [2024-12-09 16:34:24.785482] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:55.704  [2024-12-09 16:34:24.785492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:55.704  [2024-12-09 16:34:24.785503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.785514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:55.704  [2024-12-09 16:34:24.785524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.290 ms
00:23:55.704  [2024-12-09 16:34:24.785534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.785609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.704  [2024-12-09 16:34:24.785625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:55.704  [2024-12-09 16:34:24.785635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:23:55.704  [2024-12-09 16:34:24.785645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.704  [2024-12-09 16:34:24.785735] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:55.704  [2024-12-09 16:34:24.785751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:55.704  [2024-12-09 16:34:24.785762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:55.704  [2024-12-09 16:34:24.785773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:55.704  [2024-12-09 16:34:24.785785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:55.704  [2024-12-09 16:34:24.785794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:55.704  [2024-12-09 16:34:24.785804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:55.704  [2024-12-09 16:34:24.785813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:55.704  [2024-12-09 16:34:24.785824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:55.704  [2024-12-09 16:34:24.785833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:55.704  [2024-12-09 16:34:24.785842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:55.704  [2024-12-09 16:34:24.785865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:55.704  [2024-12-09 16:34:24.785875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:55.704  [2024-12-09 16:34:24.785885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:55.704  [2024-12-09 16:34:24.785894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:55.704  [2024-12-09 16:34:24.785903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:55.704  [2024-12-09 16:34:24.785912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:55.704  [2024-12-09 16:34:24.785922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:55.704  [2024-12-09 16:34:24.785947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:55.704  [2024-12-09 16:34:24.785957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:55.704  [2024-12-09 16:34:24.785967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:55.704  [2024-12-09 16:34:24.785976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:55.704  [2024-12-09 16:34:24.785985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:55.704  [2024-12-09 16:34:24.785995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:55.704  [2024-12-09 16:34:24.786004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:55.704  [2024-12-09 16:34:24.786013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:55.705  [2024-12-09 16:34:24.786022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:55.705  [2024-12-09 16:34:24.786031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:55.705  [2024-12-09 16:34:24.786040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:55.705  [2024-12-09 16:34:24.786049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:55.705  [2024-12-09 16:34:24.786058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:55.705  [2024-12-09 16:34:24.786067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:55.705  [2024-12-09 16:34:24.786077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:55.705  [2024-12-09 16:34:24.786086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:55.705  [2024-12-09 16:34:24.786095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:55.705  [2024-12-09 16:34:24.786104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:55.705  [2024-12-09 16:34:24.786113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:55.705  [2024-12-09 16:34:24.786122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:55.705  [2024-12-09 16:34:24.786132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:55.705  [2024-12-09 16:34:24.786141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:55.705  [2024-12-09 16:34:24.786150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:55.705  [2024-12-09 16:34:24.786160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:55.705  [2024-12-09 16:34:24.786168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:55.705  [2024-12-09 16:34:24.786177] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:55.705  [2024-12-09 16:34:24.786189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:55.705  [2024-12-09 16:34:24.786202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:55.705  [2024-12-09 16:34:24.786212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:55.705  [2024-12-09 16:34:24.786222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:55.705  [2024-12-09 16:34:24.786232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:55.705  [2024-12-09 16:34:24.786242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:55.705  [2024-12-09 16:34:24.786251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:55.705  [2024-12-09 16:34:24.786260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:55.705  [2024-12-09 16:34:24.786270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:55.705  [2024-12-09 16:34:24.786280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:55.705  [2024-12-09 16:34:24.786293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:55.705  [2024-12-09 16:34:24.786316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:55.705  [2024-12-09 16:34:24.786326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:55.705  [2024-12-09 16:34:24.786336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:55.705  [2024-12-09 16:34:24.786347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:55.705  [2024-12-09 16:34:24.786357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:55.705  [2024-12-09 16:34:24.786367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:55.705  [2024-12-09 16:34:24.786378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:55.705  [2024-12-09 16:34:24.786387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:55.705  [2024-12-09 16:34:24.786398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:55.705  [2024-12-09 16:34:24.786450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:55.705  [2024-12-09 16:34:24.786461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:55.705  [2024-12-09 16:34:24.786482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:55.705  [2024-12-09 16:34:24.786492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:55.705  [2024-12-09 16:34:24.786503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:55.705  [2024-12-09 16:34:24.786514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.705  [2024-12-09 16:34:24.786528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:55.705  [2024-12-09 16:34:24.786539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.834 ms
00:23:55.705  [2024-12-09 16:34:24.786548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.705  [2024-12-09 16:34:24.824700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.705  [2024-12-09 16:34:24.824740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:55.705  [2024-12-09 16:34:24.824754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.156 ms
00:23:55.705  [2024-12-09 16:34:24.824764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.705  [2024-12-09 16:34:24.824877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.705  [2024-12-09 16:34:24.824889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:55.705  [2024-12-09 16:34:24.824911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:23:55.705  [2024-12-09 16:34:24.824920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.705  [2024-12-09 16:34:24.879115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.705  [2024-12-09 16:34:24.879153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:55.705  [2024-12-09 16:34:24.879169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 54.244 ms
00:23:55.705  [2024-12-09 16:34:24.879179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.705  [2024-12-09 16:34:24.879263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.965  [2024-12-09 16:34:24.879275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:55.965  [2024-12-09 16:34:24.879287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:55.965  [2024-12-09 16:34:24.879296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.965  [2024-12-09 16:34:24.879761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.965  [2024-12-09 16:34:24.879783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:55.965  [2024-12-09 16:34:24.879802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.445 ms
00:23:55.965  [2024-12-09 16:34:24.879812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.965  [2024-12-09 16:34:24.879941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.965  [2024-12-09 16:34:24.879957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:55.965  [2024-12-09 16:34:24.879968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.106 ms
00:23:55.965  [2024-12-09 16:34:24.879978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.965  [2024-12-09 16:34:24.899080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.965  [2024-12-09 16:34:24.899114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:55.965  [2024-12-09 16:34:24.899142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.110 ms
00:23:55.965  [2024-12-09 16:34:24.899152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:24.917441] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:23:55.966  [2024-12-09 16:34:24.917481] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:55.966  [2024-12-09 16:34:24.917496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:24.917507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:55.966  [2024-12-09 16:34:24.917518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.275 ms
00:23:55.966  [2024-12-09 16:34:24.917527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:24.945260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:24.945299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:55.966  [2024-12-09 16:34:24.945312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.700 ms
00:23:55.966  [2024-12-09 16:34:24.945323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:24.962749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:24.962797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:55.966  [2024-12-09 16:34:24.962810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.376 ms
00:23:55.966  [2024-12-09 16:34:24.962819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:24.979559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:24.979595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:55.966  [2024-12-09 16:34:24.979607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.694 ms
00:23:55.966  [2024-12-09 16:34:24.979615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:24.980398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:24.980429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:55.966  [2024-12-09 16:34:24.980442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.674 ms
00:23:55.966  [2024-12-09 16:34:24.980452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.062248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.062317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:55.966  [2024-12-09 16:34:25.062334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 81.898 ms
00:23:55.966  [2024-12-09 16:34:25.062345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.072835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:55.966  [2024-12-09 16:34:25.088181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.088227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:55.966  [2024-12-09 16:34:25.088242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.784 ms
00:23:55.966  [2024-12-09 16:34:25.088257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.088360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.088373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:55.966  [2024-12-09 16:34:25.088385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:55.966  [2024-12-09 16:34:25.088396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.088445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.088456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:55.966  [2024-12-09 16:34:25.088467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.028 ms
00:23:55.966  [2024-12-09 16:34:25.088481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.088512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.088524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:55.966  [2024-12-09 16:34:25.088534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:23:55.966  [2024-12-09 16:34:25.088544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.088596] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:55.966  [2024-12-09 16:34:25.088608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.088618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:55.966  [2024-12-09 16:34:25.088628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:23:55.966  [2024-12-09 16:34:25.088654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.122943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.122978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:55.966  [2024-12-09 16:34:25.122991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.321 ms
00:23:55.966  [2024-12-09 16:34:25.123002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.123107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:55.966  [2024-12-09 16:34:25.123121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:55.966  [2024-12-09 16:34:25.123132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:23:55.966  [2024-12-09 16:34:25.123142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:55.966  [2024-12-09 16:34:25.124113] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:55.966  [2024-12-09 16:34:25.128137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.650 ms, result 0
00:23:55.966  [2024-12-09 16:34:25.129074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:56.224  [2024-12-09 16:34:25.146659] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:57.160  
[2024-12-09T16:34:27.278Z] Copying: 26/256 [MB] (26 MBps)
[2024-12-09T16:34:28.217Z] Copying: 49/256 [MB] (23 MBps)
[2024-12-09T16:34:29.156Z] Copying: 72/256 [MB] (22 MBps)
[2024-12-09T16:34:30.536Z] Copying: 95/256 [MB] (22 MBps)
[2024-12-09T16:34:31.476Z] Copying: 118/256 [MB] (23 MBps)
[2024-12-09T16:34:32.415Z] Copying: 141/256 [MB] (23 MBps)
[2024-12-09T16:34:33.354Z] Copying: 164/256 [MB] (23 MBps)
[2024-12-09T16:34:34.291Z] Copying: 188/256 [MB] (23 MBps)
[2024-12-09T16:34:35.229Z] Copying: 211/256 [MB] (23 MBps)
[2024-12-09T16:34:36.169Z] Copying: 235/256 [MB] (23 MBps)
[2024-12-09T16:34:36.169Z] Copying: 256/256 [MB] (average 23 MBps)
[2024-12-09 16:34:35.983932] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:06.990  [2024-12-09 16:34:35.998485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:35.998523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:06.990  [2024-12-09 16:34:35.998544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:24:06.990  [2024-12-09 16:34:35.998554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:35.998576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:24:06.990  [2024-12-09 16:34:36.002615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.002642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:06.990  [2024-12-09 16:34:36.002653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.029 ms
00:24:06.990  [2024-12-09 16:34:36.002663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.002914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.002939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:06.990  [2024-12-09 16:34:36.002951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.230 ms
00:24:06.990  [2024-12-09 16:34:36.002961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.005833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.005854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:06.990  [2024-12-09 16:34:36.005865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.856 ms
00:24:06.990  [2024-12-09 16:34:36.005875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.011237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.011270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:06.990  [2024-12-09 16:34:36.011281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.352 ms
00:24:06.990  [2024-12-09 16:34:36.011291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.047057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.047095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:06.990  [2024-12-09 16:34:36.047108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.761 ms
00:24:06.990  [2024-12-09 16:34:36.047118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.067979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.068016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:06.990  [2024-12-09 16:34:36.068053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.802 ms
00:24:06.990  [2024-12-09 16:34:36.068064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.068201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.068217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:06.990  [2024-12-09 16:34:36.068238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.077 ms
00:24:06.990  [2024-12-09 16:34:36.068249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.102759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.102795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:06.990  [2024-12-09 16:34:36.102807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.547 ms
00:24:06.990  [2024-12-09 16:34:36.102817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:06.990  [2024-12-09 16:34:36.137305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.990  [2024-12-09 16:34:36.137340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:06.990  [2024-12-09 16:34:36.137352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.490 ms
00:24:06.990  [2024-12-09 16:34:36.137361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.251  [2024-12-09 16:34:36.170927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.251  [2024-12-09 16:34:36.170964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:07.251  [2024-12-09 16:34:36.170976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.562 ms
00:24:07.251  [2024-12-09 16:34:36.170985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.251  [2024-12-09 16:34:36.204739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.251  [2024-12-09 16:34:36.204777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:07.251  [2024-12-09 16:34:36.204790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.728 ms
00:24:07.251  [2024-12-09 16:34:36.204799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.251  [2024-12-09 16:34:36.204852] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:07.251  [2024-12-09 16:34:36.204867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.204994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.251  [2024-12-09 16:34:36.205186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:07.252  [2024-12-09 16:34:36.205988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:07.252  [2024-12-09 16:34:36.205999] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:24:07.252  [2024-12-09 16:34:36.206010] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:07.252  [2024-12-09 16:34:36.206019] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:07.252  [2024-12-09 16:34:36.206029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:07.252  [2024-12-09 16:34:36.206040] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:07.252  [2024-12-09 16:34:36.206050] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:07.252  [2024-12-09 16:34:36.206060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:07.252  [2024-12-09 16:34:36.206076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:07.252  [2024-12-09 16:34:36.206085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:07.252  [2024-12-09 16:34:36.206094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:07.252  [2024-12-09 16:34:36.206104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.252  [2024-12-09 16:34:36.206114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:07.253  [2024-12-09 16:34:36.206125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.255 ms
00:24:07.253  [2024-12-09 16:34:36.206134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.224483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.253  [2024-12-09 16:34:36.224517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:07.253  [2024-12-09 16:34:36.224529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.357 ms
00:24:07.253  [2024-12-09 16:34:36.224538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.225100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:07.253  [2024-12-09 16:34:36.225121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:07.253  [2024-12-09 16:34:36.225132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.507 ms
00:24:07.253  [2024-12-09 16:34:36.225143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.277315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.253  [2024-12-09 16:34:36.277349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:07.253  [2024-12-09 16:34:36.277377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.253  [2024-12-09 16:34:36.277393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.277482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.253  [2024-12-09 16:34:36.277495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:07.253  [2024-12-09 16:34:36.277505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.253  [2024-12-09 16:34:36.277514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.277582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.253  [2024-12-09 16:34:36.277596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:07.253  [2024-12-09 16:34:36.277606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.253  [2024-12-09 16:34:36.277616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.277640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.253  [2024-12-09 16:34:36.277651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:07.253  [2024-12-09 16:34:36.277662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.253  [2024-12-09 16:34:36.277672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.253  [2024-12-09 16:34:36.394733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.253  [2024-12-09 16:34:36.394780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:07.253  [2024-12-09 16:34:36.394799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.253  [2024-12-09 16:34:36.394809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.490511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.490556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:07.513  [2024-12-09 16:34:36.490570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.490581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.490636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.490648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:07.513  [2024-12-09 16:34:36.490658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.490668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.490696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.490717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:07.513  [2024-12-09 16:34:36.490727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.490736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.490852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.490865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:07.513  [2024-12-09 16:34:36.490875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.490885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.490971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.490986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:07.513  [2024-12-09 16:34:36.491005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.491015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.491053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.491065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:07.513  [2024-12-09 16:34:36.491075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.491086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.491129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:07.513  [2024-12-09 16:34:36.491148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:07.513  [2024-12-09 16:34:36.491159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:07.513  [2024-12-09 16:34:36.491169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:07.513  [2024-12-09 16:34:36.491330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.645 ms, result 0
00:24:08.451  
00:24:08.451  
00:24:08.451   16:34:37 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:24:08.451   16:34:37 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:24:09.021   16:34:37 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:09.021  [2024-12-09 16:34:38.063757] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:24:09.021  [2024-12-09 16:34:38.063888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79813 ]
00:24:09.280  [2024-12-09 16:34:38.246994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:09.280  [2024-12-09 16:34:38.359031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:09.850  [2024-12-09 16:34:38.716993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:09.850  [2024-12-09 16:34:38.717072] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:09.850  [2024-12-09 16:34:38.878142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.878187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:09.850  [2024-12-09 16:34:38.878217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:09.850  [2024-12-09 16:34:38.878227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.881385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.881419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:09.850  [2024-12-09 16:34:38.881431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.143 ms
00:24:09.850  [2024-12-09 16:34:38.881457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.881552] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:09.850  [2024-12-09 16:34:38.882557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:09.850  [2024-12-09 16:34:38.882589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.882599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:09.850  [2024-12-09 16:34:38.882611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.047 ms
00:24:09.850  [2024-12-09 16:34:38.882621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.884142] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:09.850  [2024-12-09 16:34:38.902478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.902514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:24:09.850  [2024-12-09 16:34:38.902527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.367 ms
00:24:09.850  [2024-12-09 16:34:38.902537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.902634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.902647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:24:09.850  [2024-12-09 16:34:38.902658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:24:09.850  [2024-12-09 16:34:38.902668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.909473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.909498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:09.850  [2024-12-09 16:34:38.909509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.779 ms
00:24:09.850  [2024-12-09 16:34:38.909518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.909615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.909629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:09.850  [2024-12-09 16:34:38.909640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.060 ms
00:24:09.850  [2024-12-09 16:34:38.909649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.909679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.909689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:09.850  [2024-12-09 16:34:38.909699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:09.850  [2024-12-09 16:34:38.909708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.909729] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:24:09.850  [2024-12-09 16:34:38.914470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.914497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:09.850  [2024-12-09 16:34:38.914509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.753 ms
00:24:09.850  [2024-12-09 16:34:38.914518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.914584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.850  [2024-12-09 16:34:38.914596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:09.850  [2024-12-09 16:34:38.914606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:24:09.850  [2024-12-09 16:34:38.914615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.850  [2024-12-09 16:34:38.914637] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:09.850  [2024-12-09 16:34:38.914673] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:09.850  [2024-12-09 16:34:38.914711] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:09.850  [2024-12-09 16:34:38.914729] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:09.850  [2024-12-09 16:34:38.914811] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:09.851  [2024-12-09 16:34:38.914823] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:09.851  [2024-12-09 16:34:38.914835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:09.851  [2024-12-09 16:34:38.914868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:09.851  [2024-12-09 16:34:38.914880] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:09.851  [2024-12-09 16:34:38.914891] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:24:09.851  [2024-12-09 16:34:38.914917] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:09.851  [2024-12-09 16:34:38.914942] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:09.851  [2024-12-09 16:34:38.914952] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:09.851  [2024-12-09 16:34:38.914963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.851  [2024-12-09 16:34:38.914972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:09.851  [2024-12-09 16:34:38.914983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.328 ms
00:24:09.851  [2024-12-09 16:34:38.914992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.851  [2024-12-09 16:34:38.915071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.851  [2024-12-09 16:34:38.915086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:09.851  [2024-12-09 16:34:38.915096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:24:09.851  [2024-12-09 16:34:38.915105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.851  [2024-12-09 16:34:38.915195] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:09.851  [2024-12-09 16:34:38.915207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:09.851  [2024-12-09 16:34:38.915218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:09.851  [2024-12-09 16:34:38.915248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:09.851  [2024-12-09 16:34:38.915277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:09.851  [2024-12-09 16:34:38.915295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:09.851  [2024-12-09 16:34:38.915314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:24:09.851  [2024-12-09 16:34:38.915323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:09.851  [2024-12-09 16:34:38.915333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:09.851  [2024-12-09 16:34:38.915343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:24:09.851  [2024-12-09 16:34:38.915352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:09.851  [2024-12-09 16:34:38.915371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:09.851  [2024-12-09 16:34:38.915398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:09.851  [2024-12-09 16:34:38.915427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:09.851  [2024-12-09 16:34:38.915454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:09.851  [2024-12-09 16:34:38.915482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:09.851  [2024-12-09 16:34:38.915509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:09.851  [2024-12-09 16:34:38.915527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:09.851  [2024-12-09 16:34:38.915536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:24:09.851  [2024-12-09 16:34:38.915545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:09.851  [2024-12-09 16:34:38.915554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:09.851  [2024-12-09 16:34:38.915563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:24:09.851  [2024-12-09 16:34:38.915572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:09.851  [2024-12-09 16:34:38.915591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:24:09.851  [2024-12-09 16:34:38.915600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915609] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:09.851  [2024-12-09 16:34:38.915618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:09.851  [2024-12-09 16:34:38.915632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:09.851  [2024-12-09 16:34:38.915651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:09.851  [2024-12-09 16:34:38.915661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:09.851  [2024-12-09 16:34:38.915670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:09.851  [2024-12-09 16:34:38.915680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:09.851  [2024-12-09 16:34:38.915689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:09.851  [2024-12-09 16:34:38.915698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:09.851  [2024-12-09 16:34:38.915708] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:09.851  [2024-12-09 16:34:38.915722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:24:09.851  [2024-12-09 16:34:38.915745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:24:09.851  [2024-12-09 16:34:38.915756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:24:09.851  [2024-12-09 16:34:38.915766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:24:09.851  [2024-12-09 16:34:38.915776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:24:09.851  [2024-12-09 16:34:38.915786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:24:09.851  [2024-12-09 16:34:38.915796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:24:09.851  [2024-12-09 16:34:38.915806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:24:09.851  [2024-12-09 16:34:38.915816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:24:09.851  [2024-12-09 16:34:38.915826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:24:09.851  [2024-12-09 16:34:38.915877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:09.851  [2024-12-09 16:34:38.915888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:09.851  [2024-12-09 16:34:38.915921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:09.851  [2024-12-09 16:34:38.915931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:09.851  [2024-12-09 16:34:38.915942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:09.851  [2024-12-09 16:34:38.915952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.851  [2024-12-09 16:34:38.915966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:09.851  [2024-12-09 16:34:38.915977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.813 ms
00:24:09.851  [2024-12-09 16:34:38.915986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.851  [2024-12-09 16:34:38.954504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.851  [2024-12-09 16:34:38.954536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:09.851  [2024-12-09 16:34:38.954549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.523 ms
00:24:09.851  [2024-12-09 16:34:38.954559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.851  [2024-12-09 16:34:38.954670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.851  [2024-12-09 16:34:38.954682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:09.851  [2024-12-09 16:34:38.954692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:24:09.852  [2024-12-09 16:34:38.954701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.852  [2024-12-09 16:34:39.024739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.852  [2024-12-09 16:34:39.024771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:09.852  [2024-12-09 16:34:39.024787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 70.130 ms
00:24:09.852  [2024-12-09 16:34:39.024797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:09.852  [2024-12-09 16:34:39.024881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:09.852  [2024-12-09 16:34:39.024893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:09.852  [2024-12-09 16:34:39.024912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.002 ms
00:24:09.852  [2024-12-09 16:34:39.024922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.111  [2024-12-09 16:34:39.025398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.025417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:10.112  [2024-12-09 16:34:39.025434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.455 ms
00:24:10.112  [2024-12-09 16:34:39.025443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.025558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.025571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:10.112  [2024-12-09 16:34:39.025582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.092 ms
00:24:10.112  [2024-12-09 16:34:39.025592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.045559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.045589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:10.112  [2024-12-09 16:34:39.045601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.978 ms
00:24:10.112  [2024-12-09 16:34:39.045611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.063611] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:24:10.112  [2024-12-09 16:34:39.063643] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:10.112  [2024-12-09 16:34:39.063656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.063666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:24:10.112  [2024-12-09 16:34:39.063677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.980 ms
00:24:10.112  [2024-12-09 16:34:39.063686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.091490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.091526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:24:10.112  [2024-12-09 16:34:39.091539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.770 ms
00:24:10.112  [2024-12-09 16:34:39.091548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.108977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.109017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:24:10.112  [2024-12-09 16:34:39.109029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.378 ms
00:24:10.112  [2024-12-09 16:34:39.109038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.126893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.126953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:24:10.112  [2024-12-09 16:34:39.126965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.811 ms
00:24:10.112  [2024-12-09 16:34:39.126975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.127750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.127771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:10.112  [2024-12-09 16:34:39.127782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.660 ms
00:24:10.112  [2024-12-09 16:34:39.127792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.211507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.211563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:24:10.112  [2024-12-09 16:34:39.211578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 83.822 ms
00:24:10.112  [2024-12-09 16:34:39.211588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.221820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:24:10.112  [2024-12-09 16:34:39.237274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.237318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:10.112  [2024-12-09 16:34:39.237342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.637 ms
00:24:10.112  [2024-12-09 16:34:39.237351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.237463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.237476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:24:10.112  [2024-12-09 16:34:39.237486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:24:10.112  [2024-12-09 16:34:39.237495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.237552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.237564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:10.112  [2024-12-09 16:34:39.237581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.035 ms
00:24:10.112  [2024-12-09 16:34:39.237596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.237631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.237644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:10.112  [2024-12-09 16:34:39.237654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.015 ms
00:24:10.112  [2024-12-09 16:34:39.237663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.237719] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:10.112  [2024-12-09 16:34:39.237747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.237757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:24:10.112  [2024-12-09 16:34:39.237768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:24:10.112  [2024-12-09 16:34:39.237778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.272565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.272601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:10.112  [2024-12-09 16:34:39.272614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.816 ms
00:24:10.112  [2024-12-09 16:34:39.272624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.272728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.112  [2024-12-09 16:34:39.272740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:10.112  [2024-12-09 16:34:39.272751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:24:10.112  [2024-12-09 16:34:39.272771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.112  [2024-12-09 16:34:39.273856] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:10.112  [2024-12-09 16:34:39.278105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.013 ms, result 0
00:24:10.112  [2024-12-09 16:34:39.279004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:10.372  [2024-12-09 16:34:39.296809] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:10.372  [2024-12-09T16:34:39.551Z] Copying: 4096/4096 [kB] (average 22 MBps)
00:24:10.372  [2024-12-09 16:34:39.478109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:10.372  [2024-12-09 16:34:39.491804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.372  [2024-12-09 16:34:39.491843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:10.372  [2024-12-09 16:34:39.491856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.002 ms
00:24:10.372  [2024-12-09 16:34:39.491882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.372  [2024-12-09 16:34:39.491903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:24:10.372  [2024-12-09 16:34:39.496088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.372  [2024-12-09 16:34:39.496115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:10.372  [2024-12-09 16:34:39.496126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.168 ms
00:24:10.372  [2024-12-09 16:34:39.496136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.372  [2024-12-09 16:34:39.497949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.372  [2024-12-09 16:34:39.497983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:10.372  [2024-12-09 16:34:39.497995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.792 ms
00:24:10.372  [2024-12-09 16:34:39.498009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.372  [2024-12-09 16:34:39.501344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.372  [2024-12-09 16:34:39.501373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:10.372  [2024-12-09 16:34:39.501384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.323 ms
00:24:10.372  [2024-12-09 16:34:39.501409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.373  [2024-12-09 16:34:39.506863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.373  [2024-12-09 16:34:39.506906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:10.373  [2024-12-09 16:34:39.506918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.431 ms
00:24:10.373  [2024-12-09 16:34:39.506928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.373  [2024-12-09 16:34:39.542357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.373  [2024-12-09 16:34:39.542392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:10.373  [2024-12-09 16:34:39.542404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.422 ms
00:24:10.373  [2024-12-09 16:34:39.542430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.633  [2024-12-09 16:34:39.562980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.633  [2024-12-09 16:34:39.563019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:10.633  [2024-12-09 16:34:39.563031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.515 ms
00:24:10.633  [2024-12-09 16:34:39.563040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.633  [2024-12-09 16:34:39.563166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.633  [2024-12-09 16:34:39.563179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:10.633  [2024-12-09 16:34:39.563199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:24:10.633  [2024-12-09 16:34:39.563208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.633  [2024-12-09 16:34:39.597685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.633  [2024-12-09 16:34:39.597720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:10.633  [2024-12-09 16:34:39.597733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.516 ms
00:24:10.633  [2024-12-09 16:34:39.597742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.633  [2024-12-09 16:34:39.631882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.633  [2024-12-09 16:34:39.631918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:10.633  [2024-12-09 16:34:39.631930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.143 ms
00:24:10.633  [2024-12-09 16:34:39.631955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.633  [2024-12-09 16:34:39.665702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.633  [2024-12-09 16:34:39.665735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:10.633  [2024-12-09 16:34:39.665746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.749 ms
00:24:10.634  [2024-12-09 16:34:39.665755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.634  [2024-12-09 16:34:39.699359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.634  [2024-12-09 16:34:39.699391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:10.634  [2024-12-09 16:34:39.699403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.584 ms
00:24:10.634  [2024-12-09 16:34:39.699413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.634  [2024-12-09 16:34:39.699468] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:10.634  [2024-12-09 16:34:39.699485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.699996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.634  [2024-12-09 16:34:39.700391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:10.635  [2024-12-09 16:34:39.700556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:10.635  [2024-12-09 16:34:39.700566] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:24:10.635  [2024-12-09 16:34:39.700576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:10.635  [2024-12-09 16:34:39.700586] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:10.635  [2024-12-09 16:34:39.700596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:10.635  [2024-12-09 16:34:39.700606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:10.635  [2024-12-09 16:34:39.700616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:10.635  [2024-12-09 16:34:39.700631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:10.635  [2024-12-09 16:34:39.700640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:10.635  [2024-12-09 16:34:39.700649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:10.635  [2024-12-09 16:34:39.700658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:10.635  [2024-12-09 16:34:39.700668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.635  [2024-12-09 16:34:39.700678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:10.635  [2024-12-09 16:34:39.700688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.203 ms
00:24:10.635  [2024-12-09 16:34:39.700697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.635  [2024-12-09 16:34:39.719797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.635  [2024-12-09 16:34:39.719826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:10.635  [2024-12-09 16:34:39.719848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.111 ms
00:24:10.635  [2024-12-09 16:34:39.719861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.635  [2024-12-09 16:34:39.720426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.635  [2024-12-09 16:34:39.720442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:10.635  [2024-12-09 16:34:39.720451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.515 ms
00:24:10.635  [2024-12-09 16:34:39.720460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.635  [2024-12-09 16:34:39.772794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.635  [2024-12-09 16:34:39.772827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:10.635  [2024-12-09 16:34:39.772843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.635  [2024-12-09 16:34:39.772854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.635  [2024-12-09 16:34:39.772943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.635  [2024-12-09 16:34:39.772955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:10.635  [2024-12-09 16:34:39.772964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.635  [2024-12-09 16:34:39.772973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.635  [2024-12-09 16:34:39.773025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.635  [2024-12-09 16:34:39.773037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:10.635  [2024-12-09 16:34:39.773047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.635  [2024-12-09 16:34:39.773061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.635  [2024-12-09 16:34:39.773078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.635  [2024-12-09 16:34:39.773088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:10.635  [2024-12-09 16:34:39.773097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.635  [2024-12-09 16:34:39.773106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.889336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.889383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:10.895  [2024-12-09 16:34:39.889396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.889411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:10.895  [2024-12-09 16:34:39.985259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:10.895  [2024-12-09 16:34:39.985348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:10.895  [2024-12-09 16:34:39.985410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:10.895  [2024-12-09 16:34:39.985551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:10.895  [2024-12-09 16:34:39.985621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:10.895  [2024-12-09 16:34:39.985686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:10.895  [2024-12-09 16:34:39.985749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:10.895  [2024-12-09 16:34:39.985759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:10.895  [2024-12-09 16:34:39.985769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.895  [2024-12-09 16:34:39.985921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 494.886 ms, result 0
00:24:11.834  
00:24:11.834  
00:24:12.094   16:34:41 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79849
00:24:12.095   16:34:41 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:24:12.095   16:34:41 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79849
00:24:12.095   16:34:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79849 ']'
00:24:12.095  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:12.095   16:34:41 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:12.095   16:34:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:12.095   16:34:41 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:12.095   16:34:41 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:12.095   16:34:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:24:12.095  [2024-12-09 16:34:41.128555] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:24:12.095  [2024-12-09 16:34:41.128668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79849 ]
00:24:12.355  [2024-12-09 16:34:41.307173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:12.355  [2024-12-09 16:34:41.409795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:13.293   16:34:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:13.293   16:34:42 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:24:13.293   16:34:42 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:24:13.293  [2024-12-09 16:34:42.467824] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:13.293  [2024-12-09 16:34:42.467886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:13.554  [2024-12-09 16:34:42.651639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.651690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:13.554  [2024-12-09 16:34:42.651709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:13.554  [2024-12-09 16:34:42.651719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.655382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.655423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:13.554  [2024-12-09 16:34:42.655437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.648 ms
00:24:13.554  [2024-12-09 16:34:42.655447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.655570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:13.554  [2024-12-09 16:34:42.656489] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:13.554  [2024-12-09 16:34:42.656526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.656537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:13.554  [2024-12-09 16:34:42.656551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.969 ms
00:24:13.554  [2024-12-09 16:34:42.656561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.658178] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:13.554  [2024-12-09 16:34:42.676495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.676539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:24:13.554  [2024-12-09 16:34:42.676552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.351 ms
00:24:13.554  [2024-12-09 16:34:42.676564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.676656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.676673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:24:13.554  [2024-12-09 16:34:42.676683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.022 ms
00:24:13.554  [2024-12-09 16:34:42.676695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.683477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.683520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:13.554  [2024-12-09 16:34:42.683532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.747 ms
00:24:13.554  [2024-12-09 16:34:42.683546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.683675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.683694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:13.554  [2024-12-09 16:34:42.683705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.094 ms
00:24:13.554  [2024-12-09 16:34:42.683725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.683749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.683765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:13.554  [2024-12-09 16:34:42.683776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:13.554  [2024-12-09 16:34:42.683790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.683814] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:24:13.554  [2024-12-09 16:34:42.688660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.688691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:13.554  [2024-12-09 16:34:42.688721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.857 ms
00:24:13.554  [2024-12-09 16:34:42.688731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.688810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.688823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:13.554  [2024-12-09 16:34:42.688840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:24:13.554  [2024-12-09 16:34:42.688855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.688881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:13.554  [2024-12-09 16:34:42.688922] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:13.554  [2024-12-09 16:34:42.688972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:13.554  [2024-12-09 16:34:42.688992] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:13.554  [2024-12-09 16:34:42.689108] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:13.554  [2024-12-09 16:34:42.689122] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:13.554  [2024-12-09 16:34:42.689146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:13.554  [2024-12-09 16:34:42.689160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689183] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:24:13.554  [2024-12-09 16:34:42.689211] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:13.554  [2024-12-09 16:34:42.689221] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:13.554  [2024-12-09 16:34:42.689240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:13.554  [2024-12-09 16:34:42.689251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.689267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:13.554  [2024-12-09 16:34:42.689278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.378 ms
00:24:13.554  [2024-12-09 16:34:42.689293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.689373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.554  [2024-12-09 16:34:42.689392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:13.554  [2024-12-09 16:34:42.689402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:24:13.554  [2024-12-09 16:34:42.689417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.554  [2024-12-09 16:34:42.689505] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:13.554  [2024-12-09 16:34:42.689522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:13.554  [2024-12-09 16:34:42.689534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:13.554  [2024-12-09 16:34:42.689576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:13.554  [2024-12-09 16:34:42.689615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:13.554  [2024-12-09 16:34:42.689639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:13.554  [2024-12-09 16:34:42.689656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:24:13.554  [2024-12-09 16:34:42.689666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:13.554  [2024-12-09 16:34:42.689680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:13.554  [2024-12-09 16:34:42.689689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:24:13.554  [2024-12-09 16:34:42.689703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:13.554  [2024-12-09 16:34:42.689727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:13.554  [2024-12-09 16:34:42.689772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:13.554  [2024-12-09 16:34:42.689814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:13.554  [2024-12-09 16:34:42.689847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:13.554  [2024-12-09 16:34:42.689885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.554  [2024-12-09 16:34:42.689926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:13.554  [2024-12-09 16:34:42.689936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:24:13.554  [2024-12-09 16:34:42.689950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:13.555  [2024-12-09 16:34:42.689960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:13.555  [2024-12-09 16:34:42.689975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:24:13.555  [2024-12-09 16:34:42.689984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:13.555  [2024-12-09 16:34:42.689998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:13.555  [2024-12-09 16:34:42.690007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:24:13.555  [2024-12-09 16:34:42.690025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.555  [2024-12-09 16:34:42.690036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:13.555  [2024-12-09 16:34:42.690050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:24:13.555  [2024-12-09 16:34:42.690059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.555  [2024-12-09 16:34:42.690074] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:13.555  [2024-12-09 16:34:42.690089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:13.555  [2024-12-09 16:34:42.690104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:13.555  [2024-12-09 16:34:42.690114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.555  [2024-12-09 16:34:42.690130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:13.555  [2024-12-09 16:34:42.690140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:13.555  [2024-12-09 16:34:42.690152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:13.555  [2024-12-09 16:34:42.690161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:13.555  [2024-12-09 16:34:42.690173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:13.555  [2024-12-09 16:34:42.690182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:13.555  [2024-12-09 16:34:42.690195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:13.555  [2024-12-09 16:34:42.690208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:24:13.555  [2024-12-09 16:34:42.690237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:24:13.555  [2024-12-09 16:34:42.690250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:24:13.555  [2024-12-09 16:34:42.690260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:24:13.555  [2024-12-09 16:34:42.690275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:24:13.555  [2024-12-09 16:34:42.690285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:24:13.555  [2024-12-09 16:34:42.690300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:24:13.555  [2024-12-09 16:34:42.690311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:24:13.555  [2024-12-09 16:34:42.690325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:24:13.555  [2024-12-09 16:34:42.690336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:24:13.555  [2024-12-09 16:34:42.690401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:13.555  [2024-12-09 16:34:42.690413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:13.555  [2024-12-09 16:34:42.690443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:13.555  [2024-12-09 16:34:42.690458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:13.555  [2024-12-09 16:34:42.690469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:13.555  [2024-12-09 16:34:42.690486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.555  [2024-12-09 16:34:42.690497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:13.555  [2024-12-09 16:34:42.690512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.032 ms
00:24:13.555  [2024-12-09 16:34:42.690527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.555  [2024-12-09 16:34:42.728123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.555  [2024-12-09 16:34:42.728159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:13.555  [2024-12-09 16:34:42.728191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.594 ms
00:24:13.555  [2024-12-09 16:34:42.728205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.555  [2024-12-09 16:34:42.728315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.555  [2024-12-09 16:34:42.728328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:13.555  [2024-12-09 16:34:42.728342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.049 ms
00:24:13.555  [2024-12-09 16:34:42.728352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.774753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.774793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:13.815  [2024-12-09 16:34:42.774811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 46.446 ms
00:24:13.815  [2024-12-09 16:34:42.774821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.774913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.774943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:13.815  [2024-12-09 16:34:42.774959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:24:13.815  [2024-12-09 16:34:42.774970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.775424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.775450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:13.815  [2024-12-09 16:34:42.775466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.427 ms
00:24:13.815  [2024-12-09 16:34:42.775476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.775599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.775614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:13.815  [2024-12-09 16:34:42.775629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.097 ms
00:24:13.815  [2024-12-09 16:34:42.775639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.797582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.797616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:13.815  [2024-12-09 16:34:42.797634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.948 ms
00:24:13.815  [2024-12-09 16:34:42.797645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.845773] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:24:13.815  [2024-12-09 16:34:42.845825] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:13.815  [2024-12-09 16:34:42.845846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.845857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:24:13.815  [2024-12-09 16:34:42.845872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 48.166 ms
00:24:13.815  [2024-12-09 16:34:42.845890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.874004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.874045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:24:13.815  [2024-12-09 16:34:42.874061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.058 ms
00:24:13.815  [2024-12-09 16:34:42.874071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.891976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.892014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:24:13.815  [2024-12-09 16:34:42.892032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.849 ms
00:24:13.815  [2024-12-09 16:34:42.892042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.909866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.909909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:24:13.815  [2024-12-09 16:34:42.909925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.775 ms
00:24:13.815  [2024-12-09 16:34:42.909935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.815  [2024-12-09 16:34:42.910676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.815  [2024-12-09 16:34:42.910708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:13.815  [2024-12-09 16:34:42.910723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.638 ms
00:24:13.815  [2024-12-09 16:34:42.910734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:42.994378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:42.994431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:24:14.075  [2024-12-09 16:34:42.994450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 83.748 ms
00:24:14.075  [2024-12-09 16:34:42.994460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.005181] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:24:14.075  [2024-12-09 16:34:43.021333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.021403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:14.075  [2024-12-09 16:34:43.021422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.741 ms
00:24:14.075  [2024-12-09 16:34:43.021435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.021532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.021549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:24:14.075  [2024-12-09 16:34:43.021561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:14.075  [2024-12-09 16:34:43.021573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.021624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.021638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:14.075  [2024-12-09 16:34:43.021649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:24:14.075  [2024-12-09 16:34:43.021664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.021690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.021703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:14.075  [2024-12-09 16:34:43.021715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:14.075  [2024-12-09 16:34:43.021727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.021764] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:14.075  [2024-12-09 16:34:43.021781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.021796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:24:14.075  [2024-12-09 16:34:43.021809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:24:14.075  [2024-12-09 16:34:43.021819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.057764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.057801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:14.075  [2024-12-09 16:34:43.057818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.970 ms
00:24:14.075  [2024-12-09 16:34:43.057829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.057963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.075  [2024-12-09 16:34:43.057979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:14.075  [2024-12-09 16:34:43.057993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:24:14.075  [2024-12-09 16:34:43.058006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.075  [2024-12-09 16:34:43.058873] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:14.075  [2024-12-09 16:34:43.063050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.616 ms, result 0
00:24:14.075  [2024-12-09 16:34:43.064326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:14.075  Some configs were skipped because the RPC state that can call them has already passed.
00:24:14.075   16:34:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:24:14.334  [2024-12-09 16:34:43.307568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.334  [2024-12-09 16:34:43.307627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:24:14.334  [2024-12-09 16:34:43.307643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.512 ms
00:24:14.334  [2024-12-09 16:34:43.307657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.335  [2024-12-09 16:34:43.307693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.640 ms, result 0
00:24:14.335  true
00:24:14.335   16:34:43 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:24:14.594  [2024-12-09 16:34:43.511564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:14.594  [2024-12-09 16:34:43.511612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:24:14.594  [2024-12-09 16:34:43.511633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.448 ms
00:24:14.594  [2024-12-09 16:34:43.511643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:14.594  [2024-12-09 16:34:43.511692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.581 ms, result 0
00:24:14.594  true
00:24:14.594   16:34:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79849
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79849 ']'
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79849
00:24:14.594    16:34:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:14.594    16:34:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79849
00:24:14.594  killing process with pid 79849
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79849'
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79849
00:24:14.594   16:34:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79849
00:24:15.534  [2024-12-09 16:34:44.628396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.628462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:15.534  [2024-12-09 16:34:44.628477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:24:15.534  [2024-12-09 16:34:44.628488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.628513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:24:15.534  [2024-12-09 16:34:44.632544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.632575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:15.534  [2024-12-09 16:34:44.632591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.016 ms
00:24:15.534  [2024-12-09 16:34:44.632601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.632852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.632866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:15.534  [2024-12-09 16:34:44.632880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.189 ms
00:24:15.534  [2024-12-09 16:34:44.632907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.636194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.636230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:15.534  [2024-12-09 16:34:44.636248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.257 ms
00:24:15.534  [2024-12-09 16:34:44.636258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.641618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.641651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:15.534  [2024-12-09 16:34:44.641682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.329 ms
00:24:15.534  [2024-12-09 16:34:44.641692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.656101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.656145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:15.534  [2024-12-09 16:34:44.656163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.372 ms
00:24:15.534  [2024-12-09 16:34:44.656172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.666384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.666423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:15.534  [2024-12-09 16:34:44.666453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.160 ms
00:24:15.534  [2024-12-09 16:34:44.666463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.666605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.666619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:15.534  [2024-12-09 16:34:44.666632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.074 ms
00:24:15.534  [2024-12-09 16:34:44.666641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.681971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.682005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:15.534  [2024-12-09 16:34:44.682039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.326 ms
00:24:15.534  [2024-12-09 16:34:44.682049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.696173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.696208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:15.534  [2024-12-09 16:34:44.696232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.090 ms
00:24:15.534  [2024-12-09 16:34:44.696242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.534  [2024-12-09 16:34:44.710155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.534  [2024-12-09 16:34:44.710189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:15.534  [2024-12-09 16:34:44.710206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.881 ms
00:24:15.534  [2024-12-09 16:34:44.710215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.795  [2024-12-09 16:34:44.724092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.795  [2024-12-09 16:34:44.724124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:15.795  [2024-12-09 16:34:44.724141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.821 ms
00:24:15.795  [2024-12-09 16:34:44.724150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.795  [2024-12-09 16:34:44.724214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:15.796  [2024-12-09 16:34:44.724231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.724977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.796  [2024-12-09 16:34:44.725367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:15.797  [2024-12-09 16:34:44.725636] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:15.797  [2024-12-09 16:34:44.725655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:24:15.797  [2024-12-09 16:34:44.725670] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:15.797  [2024-12-09 16:34:44.725683] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:15.797  [2024-12-09 16:34:44.725692] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:15.797  [2024-12-09 16:34:44.725705] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:15.797  [2024-12-09 16:34:44.725714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:15.797  [2024-12-09 16:34:44.725727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:15.797  [2024-12-09 16:34:44.725738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:15.797  [2024-12-09 16:34:44.725749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:15.797  [2024-12-09 16:34:44.725758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:15.797  [2024-12-09 16:34:44.725770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.797  [2024-12-09 16:34:44.725779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:15.797  [2024-12-09 16:34:44.725792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.563 ms
00:24:15.797  [2024-12-09 16:34:44.725802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.744497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.797  [2024-12-09 16:34:44.744529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:15.797  [2024-12-09 16:34:44.744562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.697 ms
00:24:15.797  [2024-12-09 16:34:44.744572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.745235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:15.797  [2024-12-09 16:34:44.745261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:15.797  [2024-12-09 16:34:44.745278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.610 ms
00:24:15.797  [2024-12-09 16:34:44.745288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.810572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:15.797  [2024-12-09 16:34:44.810604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:15.797  [2024-12-09 16:34:44.810619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:15.797  [2024-12-09 16:34:44.810629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.810707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:15.797  [2024-12-09 16:34:44.810719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:15.797  [2024-12-09 16:34:44.810735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:15.797  [2024-12-09 16:34:44.810744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.810791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:15.797  [2024-12-09 16:34:44.810804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:15.797  [2024-12-09 16:34:44.810818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:15.797  [2024-12-09 16:34:44.810827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.810847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:15.797  [2024-12-09 16:34:44.810857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:15.797  [2024-12-09 16:34:44.810870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:15.797  [2024-12-09 16:34:44.810882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:15.797  [2024-12-09 16:34:44.925173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:15.797  [2024-12-09 16:34:44.925233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:15.797  [2024-12-09 16:34:44.925252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:15.797  [2024-12-09 16:34:44.925262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:16.058  [2024-12-09 16:34:45.020335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.020349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:16.058  [2024-12-09 16:34:45.020453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.020462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:16.058  [2024-12-09 16:34:45.020516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.020526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:16.058  [2024-12-09 16:34:45.020694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.020704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:16.058  [2024-12-09 16:34:45.020792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.020804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:16.058  [2024-12-09 16:34:45.020883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.020916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.020965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.058  [2024-12-09 16:34:45.020977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:16.058  [2024-12-09 16:34:45.020991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.058  [2024-12-09 16:34:45.021010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.058  [2024-12-09 16:34:45.021155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.367 ms, result 0
00:24:16.996   16:34:45 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:16.996  [2024-12-09 16:34:46.090207] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:24:16.996  [2024-12-09 16:34:46.090338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79913 ]
00:24:17.255  [2024-12-09 16:34:46.275139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.255  [2024-12-09 16:34:46.383408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:17.824  [2024-12-09 16:34:46.732411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:17.824  [2024-12-09 16:34:46.732483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:17.824  [2024-12-09 16:34:46.893288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.893336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:17.824  [2024-12-09 16:34:46.893366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:24:17.824  [2024-12-09 16:34:46.893376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.896399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.896437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:17.824  [2024-12-09 16:34:46.896465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.007 ms
00:24:17.824  [2024-12-09 16:34:46.896475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.896567] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:17.824  [2024-12-09 16:34:46.897541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:17.824  [2024-12-09 16:34:46.897578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.897589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:17.824  [2024-12-09 16:34:46.897600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.020 ms
00:24:17.824  [2024-12-09 16:34:46.897610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.899194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:17.824  [2024-12-09 16:34:46.917526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.917563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:24:17.824  [2024-12-09 16:34:46.917576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.363 ms
00:24:17.824  [2024-12-09 16:34:46.917585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.917681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.917695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:24:17.824  [2024-12-09 16:34:46.917705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:24:17.824  [2024-12-09 16:34:46.917715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.924467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.924494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:17.824  [2024-12-09 16:34:46.924504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.723 ms
00:24:17.824  [2024-12-09 16:34:46.924514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.924602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.924616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:17.824  [2024-12-09 16:34:46.924627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:24:17.824  [2024-12-09 16:34:46.924636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.924666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.924677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:17.824  [2024-12-09 16:34:46.924687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:17.824  [2024-12-09 16:34:46.924696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.924717] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:24:17.824  [2024-12-09 16:34:46.929463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.929497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:17.824  [2024-12-09 16:34:46.929508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.757 ms
00:24:17.824  [2024-12-09 16:34:46.929518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.929585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.929599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:17.824  [2024-12-09 16:34:46.929609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:24:17.824  [2024-12-09 16:34:46.929620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.824  [2024-12-09 16:34:46.929642] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:17.824  [2024-12-09 16:34:46.929666] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:17.824  [2024-12-09 16:34:46.929700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:17.824  [2024-12-09 16:34:46.929716] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:17.824  [2024-12-09 16:34:46.929802] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:17.824  [2024-12-09 16:34:46.929816] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:17.824  [2024-12-09 16:34:46.929845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:17.824  [2024-12-09 16:34:46.929861] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:17.824  [2024-12-09 16:34:46.929873] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:17.824  [2024-12-09 16:34:46.929884] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:24:17.824  [2024-12-09 16:34:46.929894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:17.824  [2024-12-09 16:34:46.929904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:17.824  [2024-12-09 16:34:46.929928] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:17.824  [2024-12-09 16:34:46.929940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.824  [2024-12-09 16:34:46.929951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:17.824  [2024-12-09 16:34:46.929961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.300 ms
00:24:17.825  [2024-12-09 16:34:46.929971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.825  [2024-12-09 16:34:46.930047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.825  [2024-12-09 16:34:46.930063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:17.825  [2024-12-09 16:34:46.930073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:24:17.825  [2024-12-09 16:34:46.930083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.825  [2024-12-09 16:34:46.930173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:17.825  [2024-12-09 16:34:46.930187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:17.825  [2024-12-09 16:34:46.930197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:17.825  [2024-12-09 16:34:46.930228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:17.825  [2024-12-09 16:34:46.930258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:17.825  [2024-12-09 16:34:46.930278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:17.825  [2024-12-09 16:34:46.930298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:24:17.825  [2024-12-09 16:34:46.930308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:17.825  [2024-12-09 16:34:46.930317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:17.825  [2024-12-09 16:34:46.930328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:24:17.825  [2024-12-09 16:34:46.930337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:17.825  [2024-12-09 16:34:46.930356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:17.825  [2024-12-09 16:34:46.930383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:17.825  [2024-12-09 16:34:46.930411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:17.825  [2024-12-09 16:34:46.930437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:17.825  [2024-12-09 16:34:46.930464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:17.825  [2024-12-09 16:34:46.930491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:17.825  [2024-12-09 16:34:46.930508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:17.825  [2024-12-09 16:34:46.930517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:24:17.825  [2024-12-09 16:34:46.930526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:17.825  [2024-12-09 16:34:46.930534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:17.825  [2024-12-09 16:34:46.930543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:24:17.825  [2024-12-09 16:34:46.930552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:17.825  [2024-12-09 16:34:46.930570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:24:17.825  [2024-12-09 16:34:46.930579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930589] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:17.825  [2024-12-09 16:34:46.930600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:17.825  [2024-12-09 16:34:46.930614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:17.825  [2024-12-09 16:34:46.930634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:17.825  [2024-12-09 16:34:46.930644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:17.825  [2024-12-09 16:34:46.930652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:17.825  [2024-12-09 16:34:46.930662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:17.825  [2024-12-09 16:34:46.930671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:17.825  [2024-12-09 16:34:46.930681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:17.825  [2024-12-09 16:34:46.930692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:17.825  [2024-12-09 16:34:46.930704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:24:17.825  [2024-12-09 16:34:46.930725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:24:17.825  [2024-12-09 16:34:46.930735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:24:17.825  [2024-12-09 16:34:46.930746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:24:17.825  [2024-12-09 16:34:46.930757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:24:17.825  [2024-12-09 16:34:46.930767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:24:17.825  [2024-12-09 16:34:46.930777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:24:17.825  [2024-12-09 16:34:46.930787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:24:17.825  [2024-12-09 16:34:46.930797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:24:17.825  [2024-12-09 16:34:46.930807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:24:17.825  [2024-12-09 16:34:46.930858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:17.825  [2024-12-09 16:34:46.930869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:17.825  [2024-12-09 16:34:46.930890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:17.825  [2024-12-09 16:34:46.930912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:17.825  [2024-12-09 16:34:46.930923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:17.825  [2024-12-09 16:34:46.930936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.825  [2024-12-09 16:34:46.930951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:17.825  [2024-12-09 16:34:46.930961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.819 ms
00:24:17.825  [2024-12-09 16:34:46.930971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.825  [2024-12-09 16:34:46.967103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.825  [2024-12-09 16:34:46.967140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:17.825  [2024-12-09 16:34:46.967153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.132 ms
00:24:17.825  [2024-12-09 16:34:46.967163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.825  [2024-12-09 16:34:46.967275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:17.825  [2024-12-09 16:34:46.967287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:17.825  [2024-12-09 16:34:46.967299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:24:17.825  [2024-12-09 16:34:46.967309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.020665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.020700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:18.085  [2024-12-09 16:34:47.020717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 53.421 ms
00:24:18.085  [2024-12-09 16:34:47.020726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.020810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.020823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:18.085  [2024-12-09 16:34:47.020833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.002 ms
00:24:18.085  [2024-12-09 16:34:47.020843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.021324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.021348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:18.085  [2024-12-09 16:34:47.021366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.460 ms
00:24:18.085  [2024-12-09 16:34:47.021377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.021495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.021509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:18.085  [2024-12-09 16:34:47.021520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.093 ms
00:24:18.085  [2024-12-09 16:34:47.021531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.040825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.040858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:18.085  [2024-12-09 16:34:47.040887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.304 ms
00:24:18.085  [2024-12-09 16:34:47.040897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.058997] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:24:18.085  [2024-12-09 16:34:47.059036] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:18.085  [2024-12-09 16:34:47.059067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.059089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:24:18.085  [2024-12-09 16:34:47.059100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.087 ms
00:24:18.085  [2024-12-09 16:34:47.059109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.087192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.087230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:24:18.085  [2024-12-09 16:34:47.087243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.054 ms
00:24:18.085  [2024-12-09 16:34:47.087253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.104647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.104683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:24:18.085  [2024-12-09 16:34:47.104695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.346 ms
00:24:18.085  [2024-12-09 16:34:47.104704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.121915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.085  [2024-12-09 16:34:47.121952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:24:18.085  [2024-12-09 16:34:47.121979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.169 ms
00:24:18.085  [2024-12-09 16:34:47.121989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.085  [2024-12-09 16:34:47.122734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.122768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:18.086  [2024-12-09 16:34:47.122780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.634 ms
00:24:18.086  [2024-12-09 16:34:47.122791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.086  [2024-12-09 16:34:47.203785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.203845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:24:18.086  [2024-12-09 16:34:47.203861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 81.096 ms
00:24:18.086  [2024-12-09 16:34:47.203872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.086  [2024-12-09 16:34:47.214080] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:24:18.086  [2024-12-09 16:34:47.229198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.229235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:18.086  [2024-12-09 16:34:47.229250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.252 ms
00:24:18.086  [2024-12-09 16:34:47.229265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.086  [2024-12-09 16:34:47.229364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.229377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:24:18.086  [2024-12-09 16:34:47.229389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:18.086  [2024-12-09 16:34:47.229399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.086  [2024-12-09 16:34:47.229451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.229462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:18.086  [2024-12-09 16:34:47.229472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:24:18.086  [2024-12-09 16:34:47.229486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.086  [2024-12-09 16:34:47.229518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.229531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:18.086  [2024-12-09 16:34:47.229541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:24:18.086  [2024-12-09 16:34:47.229550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.086  [2024-12-09 16:34:47.229584] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:18.086  [2024-12-09 16:34:47.229614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.086  [2024-12-09 16:34:47.229624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:24:18.086  [2024-12-09 16:34:47.229634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:24:18.086  [2024-12-09 16:34:47.229660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.345  [2024-12-09 16:34:47.263791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.345  [2024-12-09 16:34:47.263832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:18.345  [2024-12-09 16:34:47.263846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.163 ms
00:24:18.345  [2024-12-09 16:34:47.263856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.345  [2024-12-09 16:34:47.263967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:18.345  [2024-12-09 16:34:47.263980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:18.345  [2024-12-09 16:34:47.263992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:24:18.345  [2024-12-09 16:34:47.264002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:18.345  [2024-12-09 16:34:47.264948] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:18.345  [2024-12-09 16:34:47.268970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.965 ms, result 0
00:24:18.345  [2024-12-09 16:34:47.269812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:18.345  [2024-12-09 16:34:47.287323] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:19.280  
[2024-12-09T16:34:49.397Z] Copying: 27/256 [MB] (27 MBps)
[2024-12-09T16:34:50.774Z] Copying: 51/256 [MB] (24 MBps)
[2024-12-09T16:34:51.712Z] Copying: 76/256 [MB] (24 MBps)
[2024-12-09T16:34:52.651Z] Copying: 101/256 [MB] (24 MBps)
[2024-12-09T16:34:53.667Z] Copying: 125/256 [MB] (24 MBps)
[2024-12-09T16:34:54.604Z] Copying: 149/256 [MB] (24 MBps)
[2024-12-09T16:34:55.542Z] Copying: 173/256 [MB] (23 MBps)
[2024-12-09T16:34:56.479Z] Copying: 197/256 [MB] (24 MBps)
[2024-12-09T16:34:57.418Z] Copying: 221/256 [MB] (24 MBps)
[2024-12-09T16:34:57.987Z] Copying: 245/256 [MB] (23 MBps)
[2024-12-09T16:34:57.987Z] Copying: 256/256 [MB] (average 24 MBps)
[2024-12-09 16:34:57.981339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:29.068  [2024-12-09 16:34:58.003849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.068  [2024-12-09 16:34:58.003921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:29.068  [2024-12-09 16:34:58.003950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:29.069  [2024-12-09 16:34:58.003962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.003994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:24:29.069  [2024-12-09 16:34:58.008421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.008460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:29.069  [2024-12-09 16:34:58.008474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.413 ms
00:24:29.069  [2024-12-09 16:34:58.008484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.008747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.008764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:29.069  [2024-12-09 16:34:58.008775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.231 ms
00:24:29.069  [2024-12-09 16:34:58.008786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.011936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.011962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:29.069  [2024-12-09 16:34:58.011976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.132 ms
00:24:29.069  [2024-12-09 16:34:58.011987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.017760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.017814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:29.069  [2024-12-09 16:34:58.017842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.757 ms
00:24:29.069  [2024-12-09 16:34:58.017853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.053485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.053530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:29.069  [2024-12-09 16:34:58.053544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.584 ms
00:24:29.069  [2024-12-09 16:34:58.053554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.074227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.074271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:29.069  [2024-12-09 16:34:58.074291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.659 ms
00:24:29.069  [2024-12-09 16:34:58.074301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.074442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.074457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:29.069  [2024-12-09 16:34:58.074479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.076 ms
00:24:29.069  [2024-12-09 16:34:58.074488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.109025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.109062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:29.069  [2024-12-09 16:34:58.109090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.575 ms
00:24:29.069  [2024-12-09 16:34:58.109100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.144064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.144102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:29.069  [2024-12-09 16:34:58.144114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.980 ms
00:24:29.069  [2024-12-09 16:34:58.144124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.179460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.179494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:29.069  [2024-12-09 16:34:58.179505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.353 ms
00:24:29.069  [2024-12-09 16:34:58.179515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.213909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.069  [2024-12-09 16:34:58.213942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:29.069  [2024-12-09 16:34:58.213955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.387 ms
00:24:29.069  [2024-12-09 16:34:58.213964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.069  [2024-12-09 16:34:58.214002] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:29.069  [2024-12-09 16:34:58.214018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.069  [2024-12-09 16:34:58.214569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.214994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:29.070  [2024-12-09 16:34:58.215112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:29.070  [2024-12-09 16:34:58.215125] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         e0d69a0d-a582-4620-86ef-b082c6824320
00:24:29.070  [2024-12-09 16:34:58.215135] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:29.070  [2024-12-09 16:34:58.215145] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:29.070  [2024-12-09 16:34:58.215154] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:29.070  [2024-12-09 16:34:58.215164] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:29.070  [2024-12-09 16:34:58.215173] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:29.070  [2024-12-09 16:34:58.215183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:29.070  [2024-12-09 16:34:58.215197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:29.070  [2024-12-09 16:34:58.215206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:29.070  [2024-12-09 16:34:58.215215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:29.070  [2024-12-09 16:34:58.215227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.070  [2024-12-09 16:34:58.215240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:29.070  [2024-12-09 16:34:58.215251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.227 ms
00:24:29.070  [2024-12-09 16:34:58.215260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.070  [2024-12-09 16:34:58.234406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.070  [2024-12-09 16:34:58.234436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:29.070  [2024-12-09 16:34:58.234448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.158 ms
00:24:29.070  [2024-12-09 16:34:58.234457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.070  [2024-12-09 16:34:58.235028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.070  [2024-12-09 16:34:58.235054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:29.070  [2024-12-09 16:34:58.235067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.530 ms
00:24:29.070  [2024-12-09 16:34:58.235076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.288820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.288853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:29.330  [2024-12-09 16:34:58.288866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.288880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.288965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.288993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:29.330  [2024-12-09 16:34:58.289013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.289023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.289069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.289087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:29.330  [2024-12-09 16:34:58.289097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.289108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.289131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.289141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:29.330  [2024-12-09 16:34:58.289161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.289171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.405151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.405205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:29.330  [2024-12-09 16:34:58.405219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.405229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:29.330  [2024-12-09 16:34:58.500161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:29.330  [2024-12-09 16:34:58.500255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:29.330  [2024-12-09 16:34:58.500320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:29.330  [2024-12-09 16:34:58.500461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:29.330  [2024-12-09 16:34:58.500547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:29.330  [2024-12-09 16:34:58.500617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:29.330  [2024-12-09 16:34:58.500685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:29.330  [2024-12-09 16:34:58.500695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:29.330  [2024-12-09 16:34:58.500704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.330  [2024-12-09 16:34:58.500841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.811 ms, result 0
00:24:30.711  
00:24:30.711  
00:24:30.711   16:34:59 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:30.970  /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:24:30.970   16:34:59 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:24:30.970   16:34:59 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:24:30.970   16:34:59 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:30.970   16:34:59 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:30.970   16:34:59 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:24:30.970   16:35:00 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:24:30.970   16:35:00 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79849
00:24:30.970   16:35:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79849 ']'
00:24:30.970   16:35:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79849
00:24:30.970  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79849) - No such process
00:24:30.970  Process with pid 79849 is not found
00:24:30.970   16:35:00 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79849 is not found'
00:24:30.970  
00:24:30.970  real	1m12.103s
00:24:30.970  user	1m38.699s
00:24:30.970  sys	0m6.682s
00:24:30.970  ************************************
00:24:30.970  END TEST ftl_trim
00:24:30.970  ************************************
00:24:30.970   16:35:00 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:30.970   16:35:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:24:31.230   16:35:00 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:31.230   16:35:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:31.230   16:35:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:31.230   16:35:00 ftl -- common/autotest_common.sh@10 -- # set +x
00:24:31.230  ************************************
00:24:31.230  START TEST ftl_restore
00:24:31.230  ************************************
00:24:31.230   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:31.230  * Looking for test storage...
00:24:31.230  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:24:31.230    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:31.230     16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version
00:24:31.230     16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:31.230    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:31.230     16:35:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:31.230    16:35:00 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:24:31.230    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:31.231    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:31.231  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:31.231  		--rc genhtml_branch_coverage=1
00:24:31.231  		--rc genhtml_function_coverage=1
00:24:31.231  		--rc genhtml_legend=1
00:24:31.231  		--rc geninfo_all_blocks=1
00:24:31.231  		--rc geninfo_unexecuted_blocks=1
00:24:31.231  		
00:24:31.231  		'
00:24:31.231    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:31.231  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:31.231  		--rc genhtml_branch_coverage=1
00:24:31.231  		--rc genhtml_function_coverage=1
00:24:31.231  		--rc genhtml_legend=1
00:24:31.231  		--rc geninfo_all_blocks=1
00:24:31.231  		--rc geninfo_unexecuted_blocks=1
00:24:31.231  		
00:24:31.231  		'
00:24:31.231    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:31.231  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:31.231  		--rc genhtml_branch_coverage=1
00:24:31.231  		--rc genhtml_function_coverage=1
00:24:31.231  		--rc genhtml_legend=1
00:24:31.231  		--rc geninfo_all_blocks=1
00:24:31.231  		--rc geninfo_unexecuted_blocks=1
00:24:31.231  		
00:24:31.231  		'
00:24:31.231    16:35:00 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:31.231  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:31.231  		--rc genhtml_branch_coverage=1
00:24:31.231  		--rc genhtml_function_coverage=1
00:24:31.231  		--rc genhtml_legend=1
00:24:31.231  		--rc geninfo_all_blocks=1
00:24:31.231  		--rc geninfo_unexecuted_blocks=1
00:24:31.231  		
00:24:31.231  		'
00:24:31.231   16:35:00 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:24:31.231      16:35:00 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:24:31.491     16:35:00 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:24:31.491     16:35:00 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:31.491    16:35:00 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.x0ogcxG0Bv
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80125
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:31.491   16:35:00 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80125
00:24:31.491   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 80125 ']'
00:24:31.491   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:31.491   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:31.491  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:31.491   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:31.491   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:31.491   16:35:00 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:24:31.491  [2024-12-09 16:35:00.550205] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:24:31.491  [2024-12-09 16:35:00.550334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80125 ]
00:24:31.750  [2024-12-09 16:35:00.733324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:31.750  [2024-12-09 16:35:00.841735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:32.689   16:35:01 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:32.689   16:35:01 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0
00:24:32.689    16:35:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:24:32.689    16:35:01 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:24:32.689    16:35:01 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:24:32.689    16:35:01 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:24:32.689    16:35:01 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:24:32.689     16:35:01 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:24:32.948    16:35:01 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:24:32.948    16:35:01 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:24:32.948     16:35:01 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:24:32.948     16:35:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:24:32.948     16:35:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:32.948     16:35:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:32.948     16:35:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:32.948      16:35:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:24:33.208     16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:33.208    {
00:24:33.208      "name": "nvme0n1",
00:24:33.208      "aliases": [
00:24:33.208        "421b41d8-533d-484a-acf6-16fd905e421d"
00:24:33.208      ],
00:24:33.208      "product_name": "NVMe disk",
00:24:33.208      "block_size": 4096,
00:24:33.208      "num_blocks": 1310720,
00:24:33.208      "uuid": "421b41d8-533d-484a-acf6-16fd905e421d",
00:24:33.208      "numa_id": -1,
00:24:33.208      "assigned_rate_limits": {
00:24:33.208        "rw_ios_per_sec": 0,
00:24:33.208        "rw_mbytes_per_sec": 0,
00:24:33.208        "r_mbytes_per_sec": 0,
00:24:33.208        "w_mbytes_per_sec": 0
00:24:33.208      },
00:24:33.208      "claimed": true,
00:24:33.208      "claim_type": "read_many_write_one",
00:24:33.208      "zoned": false,
00:24:33.208      "supported_io_types": {
00:24:33.208        "read": true,
00:24:33.208        "write": true,
00:24:33.208        "unmap": true,
00:24:33.209        "flush": true,
00:24:33.209        "reset": true,
00:24:33.209        "nvme_admin": true,
00:24:33.209        "nvme_io": true,
00:24:33.209        "nvme_io_md": false,
00:24:33.209        "write_zeroes": true,
00:24:33.209        "zcopy": false,
00:24:33.209        "get_zone_info": false,
00:24:33.209        "zone_management": false,
00:24:33.209        "zone_append": false,
00:24:33.209        "compare": true,
00:24:33.209        "compare_and_write": false,
00:24:33.209        "abort": true,
00:24:33.209        "seek_hole": false,
00:24:33.209        "seek_data": false,
00:24:33.209        "copy": true,
00:24:33.209        "nvme_iov_md": false
00:24:33.209      },
00:24:33.209      "driver_specific": {
00:24:33.209        "nvme": [
00:24:33.209          {
00:24:33.209            "pci_address": "0000:00:11.0",
00:24:33.209            "trid": {
00:24:33.209              "trtype": "PCIe",
00:24:33.209              "traddr": "0000:00:11.0"
00:24:33.209            },
00:24:33.209            "ctrlr_data": {
00:24:33.209              "cntlid": 0,
00:24:33.209              "vendor_id": "0x1b36",
00:24:33.209              "model_number": "QEMU NVMe Ctrl",
00:24:33.209              "serial_number": "12341",
00:24:33.209              "firmware_revision": "8.0.0",
00:24:33.209              "subnqn": "nqn.2019-08.org.qemu:12341",
00:24:33.209              "oacs": {
00:24:33.209                "security": 0,
00:24:33.209                "format": 1,
00:24:33.209                "firmware": 0,
00:24:33.209                "ns_manage": 1
00:24:33.209              },
00:24:33.209              "multi_ctrlr": false,
00:24:33.209              "ana_reporting": false
00:24:33.209            },
00:24:33.209            "vs": {
00:24:33.209              "nvme_version": "1.4"
00:24:33.209            },
00:24:33.209            "ns_data": {
00:24:33.209              "id": 1,
00:24:33.209              "can_share": false
00:24:33.209            }
00:24:33.209          }
00:24:33.209        ],
00:24:33.209        "mp_policy": "active_passive"
00:24:33.209      }
00:24:33.209    }
00:24:33.209  ]'
00:24:33.209      16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:33.209     16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:33.209      16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:33.209     16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720
00:24:33.209     16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:24:33.209     16:35:02 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120
00:24:33.209    16:35:02 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120
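get_bdev_size turns the two jq fields above into MiB, so for the raw QEMU namespace:

    # 1310720 blocks * 4096 B/block / 1048576 B/MiB = 5120 MiB
    bs=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
    nb=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
    echo $(( nb * bs / 1024 / 1024 ))    # 5120

Since 5120 MiB is smaller than the 103424 MiB the test asked for, the size check on the next line fails and the script falls through to the lvstore/thin-lvol path below instead of using the namespace directly.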
00:24:33.209    16:35:02 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:24:33.209    16:35:02 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:24:33.209     16:35:02 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:24:33.209     16:35:02 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:24:33.469    16:35:02 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=48feb82e-8956-4a4c-a210-ee79f0fca43a
00:24:33.469    16:35:02 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
00:24:33.469    16:35:02 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48feb82e-8956-4a4c-a210-ee79f0fca43a
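clear_lvols enumerates every existing lvstore UUID and deletes it so the device starts clean; the loop behind common.sh lines 28-30 is essentially:

    stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done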
00:24:33.728     16:35:02 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:24:33.987    16:35:02 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9678bee3-58ec-4718-8c5c-1d25d8f4eda6
00:24:33.987    16:35:02 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9678bee3-58ec-4718-8c5c-1d25d8f4eda6
00:24:33.987   16:35:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=abbcc856-9203-4599-900a-44b5598b39a1
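The oversized base bdev is then faked with thin provisioning: an lvstore is created on nvme0n1 and a thin lvol (-t) of 103424 MiB is carved from it. Blocks are allocated only on first write, so the lvol can advertise far more capacity than the 5120 MiB backing namespace. Sketch:

    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)   # prints the lvstore UUID
    # -t = thin provisioning; prints the lvol UUID captured as split_bdev above
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"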
00:24:33.987   16:35:03 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:24:33.987    16:35:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 abbcc856-9203-4599-900a-44b5598b39a1
00:24:33.987    16:35:03 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
00:24:33.987    16:35:03 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:24:33.987    16:35:03 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=abbcc856-9203-4599-900a-44b5598b39a1
00:24:33.987    16:35:03 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
00:24:33.987     16:35:03 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size abbcc856-9203-4599-900a-44b5598b39a1
00:24:33.987     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=abbcc856-9203-4599-900a-44b5598b39a1
00:24:33.987     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:33.987     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:33.987     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:33.987      16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abbcc856-9203-4599-900a-44b5598b39a1
00:24:34.247     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:34.247    {
00:24:34.247      "name": "abbcc856-9203-4599-900a-44b5598b39a1",
00:24:34.247      "aliases": [
00:24:34.247        "lvs/nvme0n1p0"
00:24:34.247      ],
00:24:34.247      "product_name": "Logical Volume",
00:24:34.247      "block_size": 4096,
00:24:34.247      "num_blocks": 26476544,
00:24:34.247      "uuid": "abbcc856-9203-4599-900a-44b5598b39a1",
00:24:34.247      "assigned_rate_limits": {
00:24:34.247        "rw_ios_per_sec": 0,
00:24:34.247        "rw_mbytes_per_sec": 0,
00:24:34.247        "r_mbytes_per_sec": 0,
00:24:34.247        "w_mbytes_per_sec": 0
00:24:34.247      },
00:24:34.247      "claimed": false,
00:24:34.247      "zoned": false,
00:24:34.247      "supported_io_types": {
00:24:34.247        "read": true,
00:24:34.247        "write": true,
00:24:34.247        "unmap": true,
00:24:34.247        "flush": false,
00:24:34.247        "reset": true,
00:24:34.247        "nvme_admin": false,
00:24:34.247        "nvme_io": false,
00:24:34.247        "nvme_io_md": false,
00:24:34.247        "write_zeroes": true,
00:24:34.247        "zcopy": false,
00:24:34.247        "get_zone_info": false,
00:24:34.247        "zone_management": false,
00:24:34.247        "zone_append": false,
00:24:34.247        "compare": false,
00:24:34.247        "compare_and_write": false,
00:24:34.247        "abort": false,
00:24:34.247        "seek_hole": true,
00:24:34.247        "seek_data": true,
00:24:34.247        "copy": false,
00:24:34.247        "nvme_iov_md": false
00:24:34.247      },
00:24:34.247      "driver_specific": {
00:24:34.247        "lvol": {
00:24:34.247          "lvol_store_uuid": "9678bee3-58ec-4718-8c5c-1d25d8f4eda6",
00:24:34.247          "base_bdev": "nvme0n1",
00:24:34.247          "thin_provision": true,
00:24:34.247          "num_allocated_clusters": 0,
00:24:34.247          "snapshot": false,
00:24:34.247          "clone": false,
00:24:34.247          "esnap_clone": false
00:24:34.247        }
00:24:34.247      }
00:24:34.247    }
00:24:34.247  ]'
00:24:34.247      16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:34.247     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:34.247      16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:34.247     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:34.247     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:34.247     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:34.247    16:35:03 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
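The base_size=5171 here is the NV cache budget. Assuming common.sh sizes the cache at 5% of the base bdev (the computation itself is not visible in this trace), the arithmetic checks out:

    # 103424 MiB * 5 / 100 = 5171 MiB  (bash integer division truncates 5171.2)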
00:24:34.247    16:35:03 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:24:34.247     16:35:03 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:24:34.507    16:35:03 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:24:34.507    16:35:03 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:24:34.507     16:35:03 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size abbcc856-9203-4599-900a-44b5598b39a1
00:24:34.507     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=abbcc856-9203-4599-900a-44b5598b39a1
00:24:34.507     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:34.507     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:34.507     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:34.507      16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abbcc856-9203-4599-900a-44b5598b39a1
00:24:34.766     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:34.766    {
00:24:34.766      "name": "abbcc856-9203-4599-900a-44b5598b39a1",
00:24:34.766      "aliases": [
00:24:34.766        "lvs/nvme0n1p0"
00:24:34.766      ],
00:24:34.766      "product_name": "Logical Volume",
00:24:34.766      "block_size": 4096,
00:24:34.766      "num_blocks": 26476544,
00:24:34.766      "uuid": "abbcc856-9203-4599-900a-44b5598b39a1",
00:24:34.766      "assigned_rate_limits": {
00:24:34.766        "rw_ios_per_sec": 0,
00:24:34.766        "rw_mbytes_per_sec": 0,
00:24:34.766        "r_mbytes_per_sec": 0,
00:24:34.766        "w_mbytes_per_sec": 0
00:24:34.766      },
00:24:34.766      "claimed": false,
00:24:34.766      "zoned": false,
00:24:34.766      "supported_io_types": {
00:24:34.766        "read": true,
00:24:34.766        "write": true,
00:24:34.766        "unmap": true,
00:24:34.766        "flush": false,
00:24:34.766        "reset": true,
00:24:34.766        "nvme_admin": false,
00:24:34.766        "nvme_io": false,
00:24:34.766        "nvme_io_md": false,
00:24:34.766        "write_zeroes": true,
00:24:34.766        "zcopy": false,
00:24:34.766        "get_zone_info": false,
00:24:34.766        "zone_management": false,
00:24:34.766        "zone_append": false,
00:24:34.766        "compare": false,
00:24:34.766        "compare_and_write": false,
00:24:34.766        "abort": false,
00:24:34.766        "seek_hole": true,
00:24:34.766        "seek_data": true,
00:24:34.766        "copy": false,
00:24:34.766        "nvme_iov_md": false
00:24:34.766      },
00:24:34.766      "driver_specific": {
00:24:34.766        "lvol": {
00:24:34.766          "lvol_store_uuid": "9678bee3-58ec-4718-8c5c-1d25d8f4eda6",
00:24:34.766          "base_bdev": "nvme0n1",
00:24:34.766          "thin_provision": true,
00:24:34.766          "num_allocated_clusters": 0,
00:24:34.766          "snapshot": false,
00:24:34.766          "clone": false,
00:24:34.766          "esnap_clone": false
00:24:34.766        }
00:24:34.766      }
00:24:34.766    }
00:24:34.766  ]'
00:24:34.766      16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:34.766     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:34.766      16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:35.026     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:35.026     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:35.026     16:35:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:35.026    16:35:03 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:24:35.026    16:35:03 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:24:35.026   16:35:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
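The cache namespace on 0000:00:10.0 is trimmed to that budget with a split bdev: in bdev_split_create, -s gives the split size in MiB and the trailing positional argument is the split count, so this produces exactly one 5171 MiB bdev named nvc0n1p0:

    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1    # -> nvc0n1p0 (5171 MiB)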
00:24:35.026    16:35:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size abbcc856-9203-4599-900a-44b5598b39a1
00:24:35.026    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=abbcc856-9203-4599-900a-44b5598b39a1
00:24:35.026    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:35.026    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:35.026    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:35.026     16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abbcc856-9203-4599-900a-44b5598b39a1
00:24:35.285    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:35.285    {
00:24:35.285      "name": "abbcc856-9203-4599-900a-44b5598b39a1",
00:24:35.285      "aliases": [
00:24:35.285        "lvs/nvme0n1p0"
00:24:35.285      ],
00:24:35.285      "product_name": "Logical Volume",
00:24:35.285      "block_size": 4096,
00:24:35.285      "num_blocks": 26476544,
00:24:35.285      "uuid": "abbcc856-9203-4599-900a-44b5598b39a1",
00:24:35.285      "assigned_rate_limits": {
00:24:35.285        "rw_ios_per_sec": 0,
00:24:35.285        "rw_mbytes_per_sec": 0,
00:24:35.285        "r_mbytes_per_sec": 0,
00:24:35.285        "w_mbytes_per_sec": 0
00:24:35.285      },
00:24:35.285      "claimed": false,
00:24:35.285      "zoned": false,
00:24:35.285      "supported_io_types": {
00:24:35.285        "read": true,
00:24:35.285        "write": true,
00:24:35.285        "unmap": true,
00:24:35.285        "flush": false,
00:24:35.285        "reset": true,
00:24:35.285        "nvme_admin": false,
00:24:35.285        "nvme_io": false,
00:24:35.285        "nvme_io_md": false,
00:24:35.285        "write_zeroes": true,
00:24:35.285        "zcopy": false,
00:24:35.285        "get_zone_info": false,
00:24:35.285        "zone_management": false,
00:24:35.285        "zone_append": false,
00:24:35.285        "compare": false,
00:24:35.285        "compare_and_write": false,
00:24:35.285        "abort": false,
00:24:35.285        "seek_hole": true,
00:24:35.285        "seek_data": true,
00:24:35.285        "copy": false,
00:24:35.285        "nvme_iov_md": false
00:24:35.285      },
00:24:35.285      "driver_specific": {
00:24:35.285        "lvol": {
00:24:35.285          "lvol_store_uuid": "9678bee3-58ec-4718-8c5c-1d25d8f4eda6",
00:24:35.285          "base_bdev": "nvme0n1",
00:24:35.285          "thin_provision": true,
00:24:35.285          "num_allocated_clusters": 0,
00:24:35.285          "snapshot": false,
00:24:35.285          "clone": false,
00:24:35.285          "esnap_clone": false
00:24:35.285        }
00:24:35.285      }
00:24:35.285    }
00:24:35.285  ]'
00:24:35.285     16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:35.285    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:35.285     16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:35.285    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:35.285    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:35.285    16:35:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d abbcc856-9203-4599-900a-44b5598b39a1 --l2p_dram_limit 10'
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:24:35.285  /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
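The "integer expression expected" message is a benign script bug, not a device error: the variable tested at restore.sh line 54 expands to an empty string, and [ '' -eq 1 ] is not a valid integer comparison, so test returns non-zero and the branch is simply skipped. A defensive form defaults the empty value (the variable name below is a placeholder; the real one is not visible in this trace):

    # '[' '' -eq 1 ']'  ->  "integer expression expected"
    if [ "${some_flag:-0}" -eq 1 ]; then    # hypothetical name, defaulted to 0
        :
    fi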
00:24:35.285   16:35:04 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d abbcc856-9203-4599-900a-44b5598b39a1 --l2p_dram_limit 10 -c nvc0n1p0
00:24:35.546  [2024-12-09 16:35:04.661267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.661319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:35.546  [2024-12-09 16:35:04.661338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:35.546  [2024-12-09 16:35:04.661348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.661407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.661419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:35.546  [2024-12-09 16:35:04.661432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:24:35.546  [2024-12-09 16:35:04.661443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.661472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:35.546  [2024-12-09 16:35:04.662493] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:35.546  [2024-12-09 16:35:04.662531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.662543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:35.546  [2024-12-09 16:35:04.662558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.068 ms
00:24:35.546  [2024-12-09 16:35:04.662568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.662644] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3fd7dc7f-5371-4379-8893-54820b2eff53
00:24:35.546  [2024-12-09 16:35:04.664050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.664089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:24:35.546  [2024-12-09 16:35:04.664102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:24:35.546  [2024-12-09 16:35:04.664115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.671557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.671595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:35.546  [2024-12-09 16:35:04.671606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.395 ms
00:24:35.546  [2024-12-09 16:35:04.671618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.671708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.671725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:35.546  [2024-12-09 16:35:04.671736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.070 ms
00:24:35.546  [2024-12-09 16:35:04.671752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.671797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.671812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:35.546  [2024-12-09 16:35:04.671825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:24:35.546  [2024-12-09 16:35:04.671837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.671861] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:35.546  [2024-12-09 16:35:04.677108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.677139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:35.546  [2024-12-09 16:35:04.677155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.259 ms
00:24:35.546  [2024-12-09 16:35:04.677165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.677204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.677215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:35.546  [2024-12-09 16:35:04.677228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:24:35.546  [2024-12-09 16:35:04.677238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.546  [2024-12-09 16:35:04.677289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:24:35.546  [2024-12-09 16:35:04.677418] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:35.546  [2024-12-09 16:35:04.677439] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:35.546  [2024-12-09 16:35:04.677452] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:35.546  [2024-12-09 16:35:04.677485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:35.546  [2024-12-09 16:35:04.677497] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:35.546  [2024-12-09 16:35:04.677511] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:24:35.546  [2024-12-09 16:35:04.677521] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:35.546  [2024-12-09 16:35:04.677538] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:35.546  [2024-12-09 16:35:04.677548] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
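The layout numbers above are internally consistent: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB "Region l2p" reported in the NV cache layout below, while --l2p_dram_limit 10 caps how much of that table may stay resident in DRAM (the cache later reports an effective 9 of 10 MiB):

    # 20971520 entries * 4 B = 83886080 B = 80 MiB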
00:24:35.546  [2024-12-09 16:35:04.677561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.546  [2024-12-09 16:35:04.677583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:35.546  [2024-12-09 16:35:04.677598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.274 ms
00:24:35.546  [2024-12-09 16:35:04.677609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.547  [2024-12-09 16:35:04.677686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.547  [2024-12-09 16:35:04.677698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:35.547  [2024-12-09 16:35:04.677710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.057 ms
00:24:35.547  [2024-12-09 16:35:04.677721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.547  [2024-12-09 16:35:04.677816] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:35.547  [2024-12-09 16:35:04.677848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:35.547  [2024-12-09 16:35:04.677862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:35.547  [2024-12-09 16:35:04.677874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.677886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:35.547  [2024-12-09 16:35:04.677909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.677922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:24:35.547  [2024-12-09 16:35:04.677931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:35.547  [2024-12-09 16:35:04.677943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:24:35.547  [2024-12-09 16:35:04.677953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:35.547  [2024-12-09 16:35:04.677967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:35.547  [2024-12-09 16:35:04.677977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:24:35.547  [2024-12-09 16:35:04.677989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:35.547  [2024-12-09 16:35:04.677999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:35.547  [2024-12-09 16:35:04.678011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:24:35.547  [2024-12-09 16:35:04.678022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:35.547  [2024-12-09 16:35:04.678045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:35.547  [2024-12-09 16:35:04.678079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:35.547  [2024-12-09 16:35:04.678109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:35.547  [2024-12-09 16:35:04.678142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:35.547  [2024-12-09 16:35:04.678171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:35.547  [2024-12-09 16:35:04.678205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:35.547  [2024-12-09 16:35:04.678228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:35.547  [2024-12-09 16:35:04.678237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:24:35.547  [2024-12-09 16:35:04.678250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:35.547  [2024-12-09 16:35:04.678259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:35.547  [2024-12-09 16:35:04.678270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:24:35.547  [2024-12-09 16:35:04.678279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:35.547  [2024-12-09 16:35:04.678300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:24:35.547  [2024-12-09 16:35:04.678311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678321] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:35.547  [2024-12-09 16:35:04.678334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:35.547  [2024-12-09 16:35:04.678345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:35.547  [2024-12-09 16:35:04.678367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:35.547  [2024-12-09 16:35:04.678381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:35.547  [2024-12-09 16:35:04.678391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:35.547  [2024-12-09 16:35:04.678403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:35.547  [2024-12-09 16:35:04.678412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:35.547  [2024-12-09 16:35:04.678424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:35.547  [2024-12-09 16:35:04.678437] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:35.547  [2024-12-09 16:35:04.678454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:35.547  [2024-12-09 16:35:04.678478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:35.547  [2024-12-09 16:35:04.678489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:35.547  [2024-12-09 16:35:04.678501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:35.547  [2024-12-09 16:35:04.678512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:35.547  [2024-12-09 16:35:04.678524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:35.547  [2024-12-09 16:35:04.678535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:35.547  [2024-12-09 16:35:04.678550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:35.547  [2024-12-09 16:35:04.678560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:35.547  [2024-12-09 16:35:04.678575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:35.547  [2024-12-09 16:35:04.678631] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:35.547  [2024-12-09 16:35:04.678645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:35.547  [2024-12-09 16:35:04.678669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:35.547  [2024-12-09 16:35:04.678679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:35.547  [2024-12-09 16:35:04.678692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:35.547  [2024-12-09 16:35:04.678703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:35.547  [2024-12-09 16:35:04.678715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:35.547  [2024-12-09 16:35:04.678729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.946 ms
00:24:35.547  [2024-12-09 16:35:04.678742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:35.547  [2024-12-09 16:35:04.678783] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:24:35.547  [2024-12-09 16:35:04.678801] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:24:39.747  [2024-12-09 16:35:08.576333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.747  [2024-12-09 16:35:08.576400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:24:39.747  [2024-12-09 16:35:08.576416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3903.877 ms
00:24:39.747  [2024-12-09 16:35:08.576429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.747  [2024-12-09 16:35:08.613741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.613812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:39.748  [2024-12-09 16:35:08.613828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.921 ms
00:24:39.748  [2024-12-09 16:35:08.613841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.613974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.613992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:39.748  [2024-12-09 16:35:08.614004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:24:39.748  [2024-12-09 16:35:08.614024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.658605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.658653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:39.748  [2024-12-09 16:35:08.658667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 44.601 ms
00:24:39.748  [2024-12-09 16:35:08.658680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.658713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.658733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:39.748  [2024-12-09 16:35:08.658743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.002 ms
00:24:39.748  [2024-12-09 16:35:08.658766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.659290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.659318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:39.748  [2024-12-09 16:35:08.659330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.455 ms
00:24:39.748  [2024-12-09 16:35:08.659343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.659440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.659454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:39.748  [2024-12-09 16:35:08.659468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.076 ms
00:24:39.748  [2024-12-09 16:35:08.659483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.679761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.679806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:39.748  [2024-12-09 16:35:08.679835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.290 ms
00:24:39.748  [2024-12-09 16:35:08.679848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.715432] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:39.748  [2024-12-09 16:35:08.719526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.719565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:39.748  [2024-12-09 16:35:08.719584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.645 ms
00:24:39.748  [2024-12-09 16:35:08.719597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.814268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.814322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:24:39.748  [2024-12-09 16:35:08.814357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 94.775 ms
00:24:39.748  [2024-12-09 16:35:08.814368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.814552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.814571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:39.748  [2024-12-09 16:35:08.814588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.136 ms
00:24:39.748  [2024-12-09 16:35:08.814598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.849957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.850010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:24:39.748  [2024-12-09 16:35:08.850027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.360 ms
00:24:39.748  [2024-12-09 16:35:08.850037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.884235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.884272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:24:39.748  [2024-12-09 16:35:08.884299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.204 ms
00:24:39.748  [2024-12-09 16:35:08.884325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:39.748  [2024-12-09 16:35:08.885116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.748  [2024-12-09 16:35:08.885149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:39.748  [2024-12-09 16:35:08.885164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.751 ms
00:24:39.748  [2024-12-09 16:35:08.885177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.006  [2024-12-09 16:35:08.981182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.006  [2024-12-09 16:35:08.981223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:24:40.006  [2024-12-09 16:35:08.981243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 96.101 ms
00:24:40.006  [2024-12-09 16:35:08.981253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.006  [2024-12-09 16:35:09.017214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.006  [2024-12-09 16:35:09.017255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:24:40.006  [2024-12-09 16:35:09.017270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.923 ms
00:24:40.006  [2024-12-09 16:35:09.017280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.006  [2024-12-09 16:35:09.051014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.006  [2024-12-09 16:35:09.051059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:24:40.006  [2024-12-09 16:35:09.051103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.745 ms
00:24:40.006  [2024-12-09 16:35:09.051114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.006  [2024-12-09 16:35:09.086568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.006  [2024-12-09 16:35:09.086608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:40.006  [2024-12-09 16:35:09.086625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.469 ms
00:24:40.006  [2024-12-09 16:35:09.086635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.007  [2024-12-09 16:35:09.086682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.007  [2024-12-09 16:35:09.086694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:40.007  [2024-12-09 16:35:09.086711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:40.007  [2024-12-09 16:35:09.086721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.007  [2024-12-09 16:35:09.086833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.007  [2024-12-09 16:35:09.086849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:40.007  [2024-12-09 16:35:09.086862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:24:40.007  [2024-12-09 16:35:09.086872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.007  [2024-12-09 16:35:09.087843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4433.361 ms, result 0
00:24:40.007  {
00:24:40.007    "name": "ftl0",
00:24:40.007    "uuid": "3fd7dc7f-5371-4379-8893-54820b2eff53"
00:24:40.007  }
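bdev_ftl_create reports the new device's name and UUID. That UUID is the restore hook: recorded in the saved configuration, it lets a later create call attach to the existing on-disk FTL instance instead of formatting a new one. Passing it via -u, as sketched below, is an assumption based on the rpc's options; the actual reload happens after this excerpt:

    scripts/rpc.py bdev_ftl_create -b ftl0 -d abbcc856-9203-4599-900a-44b5598b39a1 \
        -c nvc0n1p0 --l2p_dram_limit 10 -u 3fd7dc7f-5371-4379-8893-54820b2eff53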
00:24:40.007   16:35:09 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:24:40.007   16:35:09 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:24:40.264   16:35:09 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
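Lines 61-63 of restore.sh assemble a target configuration by hand: save_subsystem_config -n bdev emits only the bdev subsystem's JSON, and the two echo calls wrap it in the {"subsystems": [...]} envelope a JSON config file needs. The redirection target is not visible in the trace; ftl.json below is a placeholder:

    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > ftl.json
    # a later spdk_tgt --json ftl.json would recreate every bdev, ftl0 included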
00:24:40.264   16:35:09 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:24:40.523  [2024-12-09 16:35:09.522485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.522535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:40.523  [2024-12-09 16:35:09.522548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.002 ms
00:24:40.523  [2024-12-09 16:35:09.522560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.522583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:40.523  [2024-12-09 16:35:09.526982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.527017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:40.523  [2024-12-09 16:35:09.527032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.385 ms
00:24:40.523  [2024-12-09 16:35:09.527042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.527278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.527296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:40.523  [2024-12-09 16:35:09.527309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.205 ms
00:24:40.523  [2024-12-09 16:35:09.527334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.529817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.529843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:40.523  [2024-12-09 16:35:09.529856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.468 ms
00:24:40.523  [2024-12-09 16:35:09.529882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.534669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.534701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:40.523  [2024-12-09 16:35:09.534735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.773 ms
00:24:40.523  [2024-12-09 16:35:09.534744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.568634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.568672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:40.523  [2024-12-09 16:35:09.568687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.894 ms
00:24:40.523  [2024-12-09 16:35:09.568696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.589543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.589582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:40.523  [2024-12-09 16:35:09.589599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.834 ms
00:24:40.523  [2024-12-09 16:35:09.589609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.589766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.589780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:40.523  [2024-12-09 16:35:09.589794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.114 ms
00:24:40.523  [2024-12-09 16:35:09.589804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.623408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.623447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:40.523  [2024-12-09 16:35:09.623478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.631 ms
00:24:40.523  [2024-12-09 16:35:09.623488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.656833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.656871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:40.523  [2024-12-09 16:35:09.656886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.356 ms
00:24:40.523  [2024-12-09 16:35:09.656901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.523  [2024-12-09 16:35:09.689954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.523  [2024-12-09 16:35:09.689989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:40.523  [2024-12-09 16:35:09.690020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.060 ms
00:24:40.523  [2024-12-09 16:35:09.690030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.783  [2024-12-09 16:35:09.723401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.783  [2024-12-09 16:35:09.723438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:40.783  [2024-12-09 16:35:09.723453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.324 ms
00:24:40.783  [2024-12-09 16:35:09.723462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.783  [2024-12-09 16:35:09.723502] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:40.783  [2024-12-09 16:35:09.723518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.783  [2024-12-09 16:35:09.723663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.723991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:40.784  [2024-12-09 16:35:09.724787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:40.784  [2024-12-09 16:35:09.724799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         3fd7dc7f-5371-4379-8893-54820b2eff53
00:24:40.784  [2024-12-09 16:35:09.724810] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:40.784  [2024-12-09 16:35:09.724824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:40.785  [2024-12-09 16:35:09.724838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:40.785  [2024-12-09 16:35:09.724851] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:40.785  [2024-12-09 16:35:09.724860] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:40.785  [2024-12-09 16:35:09.724873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:40.785  [2024-12-09 16:35:09.724883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:40.785  [2024-12-09 16:35:09.724902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:40.785  [2024-12-09 16:35:09.724912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:40.785  [2024-12-09 16:35:09.724924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.785  [2024-12-09 16:35:09.724937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:40.785  [2024-12-09 16:35:09.724950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.425 ms
00:24:40.785  [2024-12-09 16:35:09.724963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.744038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.785  [2024-12-09 16:35:09.744072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:40.785  [2024-12-09 16:35:09.744087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.043 ms
00:24:40.785  [2024-12-09 16:35:09.744096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.744656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:40.785  [2024-12-09 16:35:09.744677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:40.785  [2024-12-09 16:35:09.744694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.532 ms
00:24:40.785  [2024-12-09 16:35:09.744706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.806763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:40.785  [2024-12-09 16:35:09.806799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:40.785  [2024-12-09 16:35:09.806814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:40.785  [2024-12-09 16:35:09.806824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.806875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:40.785  [2024-12-09 16:35:09.806885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:40.785  [2024-12-09 16:35:09.806908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:40.785  [2024-12-09 16:35:09.806918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.807024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:40.785  [2024-12-09 16:35:09.807038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:40.785  [2024-12-09 16:35:09.807051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:40.785  [2024-12-09 16:35:09.807060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.807083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:40.785  [2024-12-09 16:35:09.807094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:40.785  [2024-12-09 16:35:09.807106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:40.785  [2024-12-09 16:35:09.807119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:40.785  [2024-12-09 16:35:09.924910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:40.785  [2024-12-09 16:35:09.924958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:40.785  [2024-12-09 16:35:09.924992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:40.785  [2024-12-09 16:35:09.925011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:41.045  [2024-12-09 16:35:10.021138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:41.045  [2024-12-09 16:35:10.021275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:41.045  [2024-12-09 16:35:10.021362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:41.045  [2024-12-09 16:35:10.021556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:41.045  [2024-12-09 16:35:10.021634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:41.045  [2024-12-09 16:35:10.021713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:41.045  [2024-12-09 16:35:10.021793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:41.045  [2024-12-09 16:35:10.021806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:41.045  [2024-12-09 16:35:10.021816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:41.045  [2024-12-09 16:35:10.021973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.240 ms, result 0
00:24:41.045  true
00:24:41.045   16:35:10 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80125
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80125 ']'
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80125
00:24:41.045    16:35:10 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:41.045    16:35:10 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80125
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:41.045  killing process with pid 80125
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80125'
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 80125
00:24:41.045   16:35:10 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 80125
00:24:44.338   16:35:13 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:24:48.532  262144+0 records in
00:24:48.532  262144+0 records out
00:24:48.532  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.97747 s, 270 MB/s
00:24:48.532   16:35:17 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:24:49.912   16:35:18 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:49.912  [2024-12-09 16:35:18.953221] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:24:49.912  [2024-12-09 16:35:18.953349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80363 ]
00:24:50.171  [2024-12-09 16:35:19.136523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:50.171  [2024-12-09 16:35:19.251921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:50.430  [2024-12-09 16:35:19.597682] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:50.430  [2024-12-09 16:35:19.597770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:50.691  [2024-12-09 16:35:19.761989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.762041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:50.691  [2024-12-09 16:35:19.762057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.021 ms
00:24:50.691  [2024-12-09 16:35:19.762067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.762114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.762129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:50.691  [2024-12-09 16:35:19.762140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:24:50.691  [2024-12-09 16:35:19.762149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.762170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:50.691  [2024-12-09 16:35:19.763145] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:50.691  [2024-12-09 16:35:19.763174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.763184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:50.691  [2024-12-09 16:35:19.763195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.009 ms
00:24:50.691  [2024-12-09 16:35:19.763205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.764632] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:50.691  [2024-12-09 16:35:19.782753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.782808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:24:50.691  [2024-12-09 16:35:19.782838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.150 ms
00:24:50.691  [2024-12-09 16:35:19.782849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.782933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.782946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:24:50.691  [2024-12-09 16:35:19.782958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:24:50.691  [2024-12-09 16:35:19.782968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.789712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.789740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:50.691  [2024-12-09 16:35:19.789751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.686 ms
00:24:50.691  [2024-12-09 16:35:19.789765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.789853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.789866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:50.691  [2024-12-09 16:35:19.789877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.056 ms
00:24:50.691  [2024-12-09 16:35:19.789888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.789934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.789947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:50.691  [2024-12-09 16:35:19.789956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:24:50.691  [2024-12-09 16:35:19.789965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.789992] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:50.691  [2024-12-09 16:35:19.794693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.794721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:50.691  [2024-12-09 16:35:19.794736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.714 ms
00:24:50.691  [2024-12-09 16:35:19.794745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.794794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.794805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:50.691  [2024-12-09 16:35:19.794815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:24:50.691  [2024-12-09 16:35:19.794825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.794874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:50.691  [2024-12-09 16:35:19.794898] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:50.691  [2024-12-09 16:35:19.794942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:50.691  [2024-12-09 16:35:19.794962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:50.691  [2024-12-09 16:35:19.795048] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:50.691  [2024-12-09 16:35:19.795061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:50.691  [2024-12-09 16:35:19.795073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:50.691  [2024-12-09 16:35:19.795086] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795098] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795108] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:24:50.691  [2024-12-09 16:35:19.795118] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:50.691  [2024-12-09 16:35:19.795130] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:50.691  [2024-12-09 16:35:19.795139] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:50.691  [2024-12-09 16:35:19.795164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.795175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:50.691  [2024-12-09 16:35:19.795185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.293 ms
00:24:50.691  [2024-12-09 16:35:19.795194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.795265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.691  [2024-12-09 16:35:19.795276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:50.691  [2024-12-09 16:35:19.795285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:24:50.691  [2024-12-09 16:35:19.795295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.691  [2024-12-09 16:35:19.795390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:50.691  [2024-12-09 16:35:19.795405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:50.691  [2024-12-09 16:35:19.795416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:50.691  [2024-12-09 16:35:19.795445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:50.691  [2024-12-09 16:35:19.795474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:50.691  [2024-12-09 16:35:19.795494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:50.691  [2024-12-09 16:35:19.795504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:24:50.691  [2024-12-09 16:35:19.795513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:50.691  [2024-12-09 16:35:19.795531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:50.691  [2024-12-09 16:35:19.795541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:24:50.691  [2024-12-09 16:35:19.795550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:50.691  [2024-12-09 16:35:19.795569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:50.691  [2024-12-09 16:35:19.795597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:50.691  [2024-12-09 16:35:19.795625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:50.691  [2024-12-09 16:35:19.795652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:50.691  [2024-12-09 16:35:19.795678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:50.691  [2024-12-09 16:35:19.795697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:50.691  [2024-12-09 16:35:19.795705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:24:50.691  [2024-12-09 16:35:19.795714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:50.691  [2024-12-09 16:35:19.795723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:50.691  [2024-12-09 16:35:19.795732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:24:50.691  [2024-12-09 16:35:19.795741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:50.691  [2024-12-09 16:35:19.795750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:50.691  [2024-12-09 16:35:19.795758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:24:50.691  [2024-12-09 16:35:19.795767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:50.692  [2024-12-09 16:35:19.795776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:50.692  [2024-12-09 16:35:19.795785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:24:50.692  [2024-12-09 16:35:19.795795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:50.692  [2024-12-09 16:35:19.795804] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:50.692  [2024-12-09 16:35:19.795814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:50.692  [2024-12-09 16:35:19.795824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:50.692  [2024-12-09 16:35:19.795833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:50.692  [2024-12-09 16:35:19.795843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:50.692  [2024-12-09 16:35:19.795852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:50.692  [2024-12-09 16:35:19.795861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:50.692  [2024-12-09 16:35:19.795871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:50.692  [2024-12-09 16:35:19.795880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:50.692  [2024-12-09 16:35:19.795889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:50.692  [2024-12-09 16:35:19.795899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:50.692  [2024-12-09 16:35:19.795921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.795936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:50.692  [2024-12-09 16:35:19.795947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:50.692  [2024-12-09 16:35:19.795958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:50.692  [2024-12-09 16:35:19.795968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:50.692  [2024-12-09 16:35:19.795978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:50.692  [2024-12-09 16:35:19.795989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:50.692  [2024-12-09 16:35:19.795999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:50.692  [2024-12-09 16:35:19.796010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:50.692  [2024-12-09 16:35:19.796020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:50.692  [2024-12-09 16:35:19.796030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.796040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.796050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.796061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.796071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:50.692  [2024-12-09 16:35:19.796081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:50.692  [2024-12-09 16:35:19.796092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.796103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:50.692  [2024-12-09 16:35:19.796113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:50.692  [2024-12-09 16:35:19.796123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:50.692  [2024-12-09 16:35:19.796134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:50.692  [2024-12-09 16:35:19.796145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.692  [2024-12-09 16:35:19.796155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:50.692  [2024-12-09 16:35:19.796166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.810 ms
00:24:50.692  [2024-12-09 16:35:19.796176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.692  [2024-12-09 16:35:19.832809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.692  [2024-12-09 16:35:19.832843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:50.692  [2024-12-09 16:35:19.832855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.645 ms
00:24:50.692  [2024-12-09 16:35:19.832869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.692  [2024-12-09 16:35:19.832967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.692  [2024-12-09 16:35:19.832979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:50.692  [2024-12-09 16:35:19.832989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.058 ms
00:24:50.692  [2024-12-09 16:35:19.832999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.904322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.904361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:50.952  [2024-12-09 16:35:19.904374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 71.369 ms
00:24:50.952  [2024-12-09 16:35:19.904385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.904442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.904454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:50.952  [2024-12-09 16:35:19.904473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:24:50.952  [2024-12-09 16:35:19.904483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.904996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.905027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:50.952  [2024-12-09 16:35:19.905039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.443 ms
00:24:50.952  [2024-12-09 16:35:19.905049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.905170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.905185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:50.952  [2024-12-09 16:35:19.905204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.098 ms
00:24:50.952  [2024-12-09 16:35:19.905214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.922657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.922696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:50.952  [2024-12-09 16:35:19.922725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.451 ms
00:24:50.952  [2024-12-09 16:35:19.922735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.941107] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:24:50.952  [2024-12-09 16:35:19.941145] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:50.952  [2024-12-09 16:35:19.941160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.941170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:24:50.952  [2024-12-09 16:35:19.941180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.352 ms
00:24:50.952  [2024-12-09 16:35:19.941205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.968612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.968660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:24:50.952  [2024-12-09 16:35:19.968673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.407 ms
00:24:50.952  [2024-12-09 16:35:19.968682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:19.986425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:19.986463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:24:50.952  [2024-12-09 16:35:19.986491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.713 ms
00:24:50.952  [2024-12-09 16:35:19.986501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:20.003587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:20.003627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:24:50.952  [2024-12-09 16:35:20.003639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.074 ms
00:24:50.952  [2024-12-09 16:35:20.003649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:20.004398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:20.004431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:50.952  [2024-12-09 16:35:20.004443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.648 ms
00:24:50.952  [2024-12-09 16:35:20.004460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:20.086239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:20.086297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:24:50.952  [2024-12-09 16:35:20.086313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 81.889 ms
00:24:50.952  [2024-12-09 16:35:20.086350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:20.096536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:50.952  [2024-12-09 16:35:20.098782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:20.098813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:50.952  [2024-12-09 16:35:20.098824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.405 ms
00:24:50.952  [2024-12-09 16:35:20.098850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.952  [2024-12-09 16:35:20.098941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.952  [2024-12-09 16:35:20.098955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:24:50.953  [2024-12-09 16:35:20.098966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:24:50.953  [2024-12-09 16:35:20.098976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.953  [2024-12-09 16:35:20.099071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.953  [2024-12-09 16:35:20.099083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:50.953  [2024-12-09 16:35:20.099094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:24:50.953  [2024-12-09 16:35:20.099104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.953  [2024-12-09 16:35:20.099125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.953  [2024-12-09 16:35:20.099135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:50.953  [2024-12-09 16:35:20.099145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:50.953  [2024-12-09 16:35:20.099155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:50.953  [2024-12-09 16:35:20.099194] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:50.953  [2024-12-09 16:35:20.099213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.953  [2024-12-09 16:35:20.099223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:24:50.953  [2024-12-09 16:35:20.099234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:24:50.953  [2024-12-09 16:35:20.099244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:51.212  [2024-12-09 16:35:20.133411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:51.212  [2024-12-09 16:35:20.133448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:51.212  [2024-12-09 16:35:20.133462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.204 ms
00:24:51.212  [2024-12-09 16:35:20.133481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:51.212  [2024-12-09 16:35:20.133568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:51.212  [2024-12-09 16:35:20.133580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:51.212  [2024-12-09 16:35:20.133591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:24:51.212  [2024-12-09 16:35:20.133601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:51.212  [2024-12-09 16:35:20.134722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.877 ms, result 0
00:24:52.150  
[2024-12-09T16:35:22.267Z] Copying: 23/1024 [MB] (23 MBps)
[2024-12-09T16:35:23.205Z] Copying: 47/1024 [MB] (23 MBps)
[2024-12-09T16:35:24.583Z] Copying: 70/1024 [MB] (23 MBps)
[2024-12-09T16:35:25.150Z] Copying: 93/1024 [MB] (22 MBps)
[2024-12-09T16:35:26.526Z] Copying: 116/1024 [MB] (23 MBps)
[2024-12-09T16:35:27.195Z] Copying: 140/1024 [MB] (24 MBps)
[2024-12-09T16:35:28.139Z] Copying: 165/1024 [MB] (24 MBps)
[2024-12-09T16:35:29.518Z] Copying: 189/1024 [MB] (24 MBps)
[2024-12-09T16:35:30.457Z] Copying: 212/1024 [MB] (22 MBps)
[2024-12-09T16:35:31.395Z] Copying: 236/1024 [MB] (23 MBps)
[2024-12-09T16:35:32.333Z] Copying: 259/1024 [MB] (23 MBps)
[2024-12-09T16:35:33.272Z] Copying: 281/1024 [MB] (22 MBps)
[2024-12-09T16:35:34.210Z] Copying: 305/1024 [MB] (23 MBps)
[2024-12-09T16:35:35.149Z] Copying: 329/1024 [MB] (23 MBps)
[2024-12-09T16:35:36.528Z] Copying: 353/1024 [MB] (23 MBps)
[2024-12-09T16:35:37.466Z] Copying: 377/1024 [MB] (23 MBps)
[2024-12-09T16:35:38.404Z] Copying: 400/1024 [MB] (22 MBps)
[2024-12-09T16:35:39.343Z] Copying: 423/1024 [MB] (23 MBps)
[2024-12-09T16:35:40.281Z] Copying: 447/1024 [MB] (23 MBps)
[2024-12-09T16:35:41.219Z] Copying: 471/1024 [MB] (23 MBps)
[2024-12-09T16:35:42.158Z] Copying: 495/1024 [MB] (23 MBps)
[2024-12-09T16:35:43.537Z] Copying: 518/1024 [MB] (23 MBps)
[2024-12-09T16:35:44.474Z] Copying: 542/1024 [MB] (23 MBps)
[2024-12-09T16:35:45.412Z] Copying: 565/1024 [MB] (22 MBps)
[2024-12-09T16:35:46.350Z] Copying: 589/1024 [MB] (23 MBps)
[2024-12-09T16:35:47.289Z] Copying: 612/1024 [MB] (23 MBps)
[2024-12-09T16:35:48.226Z] Copying: 635/1024 [MB] (23 MBps)
[2024-12-09T16:35:49.164Z] Copying: 659/1024 [MB] (23 MBps)
[2024-12-09T16:35:50.102Z] Copying: 683/1024 [MB] (23 MBps)
[2024-12-09T16:35:51.481Z] Copying: 706/1024 [MB] (23 MBps)
[2024-12-09T16:35:52.419Z] Copying: 729/1024 [MB] (22 MBps)
[2024-12-09T16:35:53.357Z] Copying: 752/1024 [MB] (22 MBps)
[2024-12-09T16:35:54.295Z] Copying: 775/1024 [MB] (22 MBps)
[2024-12-09T16:35:55.234Z] Copying: 798/1024 [MB] (22 MBps)
[2024-12-09T16:35:56.172Z] Copying: 821/1024 [MB] (23 MBps)
[2024-12-09T16:35:57.110Z] Copying: 844/1024 [MB] (23 MBps)
[2024-12-09T16:35:58.490Z] Copying: 867/1024 [MB] (22 MBps)
[2024-12-09T16:35:59.115Z] Copying: 890/1024 [MB] (23 MBps)
[2024-12-09T16:36:00.494Z] Copying: 914/1024 [MB] (23 MBps)
[2024-12-09T16:36:01.435Z] Copying: 937/1024 [MB] (23 MBps)
[2024-12-09T16:36:02.372Z] Copying: 961/1024 [MB] (23 MBps)
[2024-12-09T16:36:03.319Z] Copying: 984/1024 [MB] (23 MBps)
[2024-12-09T16:36:03.889Z] Copying: 1008/1024 [MB] (24 MBps)
[2024-12-09T16:36:03.889Z] Copying: 1024/1024 [MB] (average 23 MBps)
00:25:34.710  [2024-12-09 16:36:03.771199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.771265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:25:34.710  [2024-12-09 16:36:03.771282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:25:34.710  [2024-12-09 16:36:03.771292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.771322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:34.710  [2024-12-09 16:36:03.775576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.775611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:25:34.710  [2024-12-09 16:36:03.775630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.243 ms
00:25:34.710  [2024-12-09 16:36:03.775639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.777528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.777570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:25:34.710  [2024-12-09 16:36:03.777582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.866 ms
00:25:34.710  [2024-12-09 16:36:03.777592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.795699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.795744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:25:34.710  [2024-12-09 16:36:03.795757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.119 ms
00:25:34.710  [2024-12-09 16:36:03.795767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.800650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.800684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:25:34.710  [2024-12-09 16:36:03.800696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.835 ms
00:25:34.710  [2024-12-09 16:36:03.800722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.835659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.835697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:25:34.710  [2024-12-09 16:36:03.835710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.940 ms
00:25:34.710  [2024-12-09 16:36:03.835719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.856026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.856064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:25:34.710  [2024-12-09 16:36:03.856077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.288 ms
00:25:34.710  [2024-12-09 16:36:03.856092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.710  [2024-12-09 16:36:03.856243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.710  [2024-12-09 16:36:03.856260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:25:34.710  [2024-12-09 16:36:03.856270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.097 ms
00:25:34.710  [2024-12-09 16:36:03.856279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.971  [2024-12-09 16:36:03.890695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.972  [2024-12-09 16:36:03.890733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:25:34.972  [2024-12-09 16:36:03.890761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.457 ms
00:25:34.972  [2024-12-09 16:36:03.890770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.972  [2024-12-09 16:36:03.925607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.972  [2024-12-09 16:36:03.925644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:25:34.972  [2024-12-09 16:36:03.925671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.856 ms
00:25:34.972  [2024-12-09 16:36:03.925681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.972  [2024-12-09 16:36:03.958929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.972  [2024-12-09 16:36:03.958965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:25:34.972  [2024-12-09 16:36:03.958992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.266 ms
00:25:34.972  [2024-12-09 16:36:03.959001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.972  [2024-12-09 16:36:03.992673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.972  [2024-12-09 16:36:03.992708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:25:34.972  [2024-12-09 16:36:03.992736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.637 ms
00:25:34.972  [2024-12-09 16:36:03.992745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.972  [2024-12-09 16:36:03.992780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:34.972  [2024-12-09 16:36:03.992795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.992993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.972  [2024-12-09 16:36:03.993601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:25:34.973  [2024-12-09 16:36:03.993862] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:25:34.973  [2024-12-09 16:36:03.993876] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         3fd7dc7f-5371-4379-8893-54820b2eff53
00:25:34.973  [2024-12-09 16:36:03.993886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:25:34.973  [2024-12-09 16:36:03.993896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:25:34.973  [2024-12-09 16:36:03.993905] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:25:34.973  [2024-12-09 16:36:03.993923] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:25:34.973  [2024-12-09 16:36:03.993933] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:34.973  [2024-12-09 16:36:03.993953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:25:34.973  [2024-12-09 16:36:03.993962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:25:34.973  [2024-12-09 16:36:03.993971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:25:34.973  [2024-12-09 16:36:03.993980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:25:34.973  [2024-12-09 16:36:03.993990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.973  [2024-12-09 16:36:03.994001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:25:34.973  [2024-12-09 16:36:03.994011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.213 ms
00:25:34.973  [2024-12-09 16:36:03.994020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.973  [2024-12-09 16:36:04.013132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.973  [2024-12-09 16:36:04.013167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:25:34.973  [2024-12-09 16:36:04.013195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.106 ms
00:25:34.973  [2024-12-09 16:36:04.013204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.973  [2024-12-09 16:36:04.013749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.973  [2024-12-09 16:36:04.013765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:25:34.973  [2024-12-09 16:36:04.013776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.525 ms
00:25:34.973  [2024-12-09 16:36:04.013791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.973  [2024-12-09 16:36:04.062159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:34.973  [2024-12-09 16:36:04.062194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:25:34.973  [2024-12-09 16:36:04.062207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:34.973  [2024-12-09 16:36:04.062233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.973  [2024-12-09 16:36:04.062282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:34.973  [2024-12-09 16:36:04.062292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:25:34.973  [2024-12-09 16:36:04.062302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:34.973  [2024-12-09 16:36:04.062316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.973  [2024-12-09 16:36:04.062391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:34.973  [2024-12-09 16:36:04.062404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:25:34.973  [2024-12-09 16:36:04.062413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:34.973  [2024-12-09 16:36:04.062423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:34.973  [2024-12-09 16:36:04.062438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:34.973  [2024-12-09 16:36:04.062448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:25:34.973  [2024-12-09 16:36:04.062458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:34.973  [2024-12-09 16:36:04.062467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.185176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.185230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:25:35.234  [2024-12-09 16:36:04.185245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.185255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.282610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.282679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:25:35.234  [2024-12-09 16:36:04.282693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.282709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.282792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.282804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:25:35.234  [2024-12-09 16:36:04.282815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.282825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.282861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.282872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:25:35.234  [2024-12-09 16:36:04.282882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.282891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.283029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.283043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:25:35.234  [2024-12-09 16:36:04.283054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.283064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.283099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.283111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:25:35.234  [2024-12-09 16:36:04.283121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.283131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.283168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.283183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:25:35.234  [2024-12-09 16:36:04.283193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.283202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.283242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:35.234  [2024-12-09 16:36:04.283254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:25:35.234  [2024-12-09 16:36:04.283264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:35.234  [2024-12-09 16:36:04.283273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:35.234  [2024-12-09 16:36:04.283419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 513.012 ms, result 0
00:25:36.613  
00:25:36.613  
00:25:36.613   16:36:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:25:36.613  [2024-12-09 16:36:05.574628] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:25:36.613  [2024-12-09 16:36:05.574764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80836 ]
00:25:36.613  [2024-12-09 16:36:05.757816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:36.872  [2024-12-09 16:36:05.866490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.132  [2024-12-09 16:36:06.202756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:37.132  [2024-12-09 16:36:06.202828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:37.393  [2024-12-09 16:36:06.362419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.362475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:25:37.393  [2024-12-09 16:36:06.362490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:25:37.393  [2024-12-09 16:36:06.362500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.362559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.362574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:25:37.393  [2024-12-09 16:36:06.362584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.026 ms
00:25:37.393  [2024-12-09 16:36:06.362594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.362614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:37.393  [2024-12-09 16:36:06.363615] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:37.393  [2024-12-09 16:36:06.363647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.363658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:25:37.393  [2024-12-09 16:36:06.363669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.039 ms
00:25:37.393  [2024-12-09 16:36:06.363680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.365127] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:37.393  [2024-12-09 16:36:06.383363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.383403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:25:37.393  [2024-12-09 16:36:06.383416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.267 ms
00:25:37.393  [2024-12-09 16:36:06.383426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.383506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.383518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:25:37.393  [2024-12-09 16:36:06.383529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.022 ms
00:25:37.393  [2024-12-09 16:36:06.383538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.390306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.390334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:25:37.393  [2024-12-09 16:36:06.390345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.710 ms
00:25:37.393  [2024-12-09 16:36:06.390358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.390447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.390461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:25:37.393  [2024-12-09 16:36:06.390472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.057 ms
00:25:37.393  [2024-12-09 16:36:06.390481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.390519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.390531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:25:37.393  [2024-12-09 16:36:06.390541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:25:37.393  [2024-12-09 16:36:06.390551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.390577] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:37.393  [2024-12-09 16:36:06.395381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.395409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:25:37.393  [2024-12-09 16:36:06.395440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.816 ms
00:25:37.393  [2024-12-09 16:36:06.395450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.395482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.395493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:25:37.393  [2024-12-09 16:36:06.395504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:25:37.393  [2024-12-09 16:36:06.395514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.395565] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:37.393  [2024-12-09 16:36:06.395590] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:37.393  [2024-12-09 16:36:06.395623] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:37.393  [2024-12-09 16:36:06.395643] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:37.393  [2024-12-09 16:36:06.395746] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:37.393  [2024-12-09 16:36:06.395759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:37.393  [2024-12-09 16:36:06.395772] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:37.393  [2024-12-09 16:36:06.395785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:25:37.393  [2024-12-09 16:36:06.395797] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:25:37.393  [2024-12-09 16:36:06.395808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:25:37.393  [2024-12-09 16:36:06.395818] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:25:37.393  [2024-12-09 16:36:06.395831] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:25:37.393  [2024-12-09 16:36:06.395841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:25:37.393  [2024-12-09 16:36:06.395851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.395862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:25:37.393  [2024-12-09 16:36:06.395872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.289 ms
00:25:37.393  [2024-12-09 16:36:06.395881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.395965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.395977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:25:37.393  [2024-12-09 16:36:06.395987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.068 ms
00:25:37.393  [2024-12-09 16:36:06.395997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.393  [2024-12-09 16:36:06.396091] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:37.393  [2024-12-09 16:36:06.396111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:25:37.393  [2024-12-09 16:36:06.396122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:25:37.393  [2024-12-09 16:36:06.396151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:25:37.393  [2024-12-09 16:36:06.396179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:25:37.393  [2024-12-09 16:36:06.396198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:25:37.393  [2024-12-09 16:36:06.396207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:25:37.393  [2024-12-09 16:36:06.396217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:25:37.393  [2024-12-09 16:36:06.396235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:25:37.393  [2024-12-09 16:36:06.396244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:25:37.393  [2024-12-09 16:36:06.396254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:25:37.393  [2024-12-09 16:36:06.396273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:25:37.393  [2024-12-09 16:36:06.396301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:25:37.393  [2024-12-09 16:36:06.396328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:25:37.393  [2024-12-09 16:36:06.396355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:25:37.393  [2024-12-09 16:36:06.396383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:25:37.393  [2024-12-09 16:36:06.396411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:25:37.393  [2024-12-09 16:36:06.396428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:25:37.393  [2024-12-09 16:36:06.396437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:25:37.393  [2024-12-09 16:36:06.396446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:25:37.393  [2024-12-09 16:36:06.396455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:25:37.393  [2024-12-09 16:36:06.396464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:25:37.393  [2024-12-09 16:36:06.396473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:25:37.393  [2024-12-09 16:36:06.396490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:25:37.393  [2024-12-09 16:36:06.396499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:37.393  [2024-12-09 16:36:06.396518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:25:37.393  [2024-12-09 16:36:06.396527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:37.393  [2024-12-09 16:36:06.396547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:25:37.393  [2024-12-09 16:36:06.396556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:25:37.393  [2024-12-09 16:36:06.396565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:25:37.393  [2024-12-09 16:36:06.396575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:25:37.393  [2024-12-09 16:36:06.396583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:25:37.393  [2024-12-09 16:36:06.396592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:25:37.393  [2024-12-09 16:36:06.396603] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:37.393  [2024-12-09 16:36:06.396615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:25:37.393  [2024-12-09 16:36:06.396640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:25:37.393  [2024-12-09 16:36:06.396651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:25:37.393  [2024-12-09 16:36:06.396662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:25:37.393  [2024-12-09 16:36:06.396672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:25:37.393  [2024-12-09 16:36:06.396684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:25:37.393  [2024-12-09 16:36:06.396694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:25:37.393  [2024-12-09 16:36:06.396706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:25:37.393  [2024-12-09 16:36:06.396716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:25:37.393  [2024-12-09 16:36:06.396725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:25:37.393  [2024-12-09 16:36:06.396775] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:37.393  [2024-12-09 16:36:06.396786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:37.393  [2024-12-09 16:36:06.396807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:37.393  [2024-12-09 16:36:06.396817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:37.393  [2024-12-09 16:36:06.396827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:37.393  [2024-12-09 16:36:06.396837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.393  [2024-12-09 16:36:06.396847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:25:37.394  [2024-12-09 16:36:06.396857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.801 ms
00:25:37.394  [2024-12-09 16:36:06.396867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.435861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.435921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:25:37.394  [2024-12-09 16:36:06.435935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.991 ms
00:25:37.394  [2024-12-09 16:36:06.435950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.436025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.436036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:25:37.394  [2024-12-09 16:36:06.436047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:25:37.394  [2024-12-09 16:36:06.436057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.505512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.505556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:25:37.394  [2024-12-09 16:36:06.505572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 69.513 ms
00:25:37.394  [2024-12-09 16:36:06.505582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.505626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.505638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:25:37.394  [2024-12-09 16:36:06.505654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:25:37.394  [2024-12-09 16:36:06.505664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.506173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.506195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:25:37.394  [2024-12-09 16:36:06.506206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.437 ms
00:25:37.394  [2024-12-09 16:36:06.506217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.506335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.506348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:25:37.394  [2024-12-09 16:36:06.506364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.096 ms
00:25:37.394  [2024-12-09 16:36:06.506374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.524891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.524956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:25:37.394  [2024-12-09 16:36:06.524969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.527 ms
00:25:37.394  [2024-12-09 16:36:06.524979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.394  [2024-12-09 16:36:06.543571] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:25:37.394  [2024-12-09 16:36:06.543620] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:37.394  [2024-12-09 16:36:06.543634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.394  [2024-12-09 16:36:06.543644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:25:37.394  [2024-12-09 16:36:06.543670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.556 ms
00:25:37.394  [2024-12-09 16:36:06.543680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.572241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.572282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:25:37.653  [2024-12-09 16:36:06.572311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.563 ms
00:25:37.653  [2024-12-09 16:36:06.572321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.590279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.590318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:25:37.653  [2024-12-09 16:36:06.590346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.933 ms
00:25:37.653  [2024-12-09 16:36:06.590355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.608223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.608261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:25:37.653  [2024-12-09 16:36:06.608274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.860 ms
00:25:37.653  [2024-12-09 16:36:06.608283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.609104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.609136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:25:37.653  [2024-12-09 16:36:06.609152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.712 ms
00:25:37.653  [2024-12-09 16:36:06.609162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.691990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.692051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:25:37.653  [2024-12-09 16:36:06.692090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 82.939 ms
00:25:37.653  [2024-12-09 16:36:06.692100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.702311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:25:37.653  [2024-12-09 16:36:06.704805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.704833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:25:37.653  [2024-12-09 16:36:06.704845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.675 ms
00:25:37.653  [2024-12-09 16:36:06.704855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.704962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.704976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:25:37.653  [2024-12-09 16:36:06.704991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:25:37.653  [2024-12-09 16:36:06.705001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.705079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.705091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:25:37.653  [2024-12-09 16:36:06.705102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.029 ms
00:25:37.653  [2024-12-09 16:36:06.705111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.705131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.705141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:25:37.653  [2024-12-09 16:36:06.705151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:25:37.653  [2024-12-09 16:36:06.705161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.705213] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:37.653  [2024-12-09 16:36:06.705225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.705235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:25:37.653  [2024-12-09 16:36:06.705245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:25:37.653  [2024-12-09 16:36:06.705255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.740670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.653  [2024-12-09 16:36:06.740706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:25:37.653  [2024-12-09 16:36:06.740725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.453 ms
00:25:37.653  [2024-12-09 16:36:06.740735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.653  [2024-12-09 16:36:06.740819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.654  [2024-12-09 16:36:06.740843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:25:37.654  [2024-12-09 16:36:06.740854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:25:37.654  [2024-12-09 16:36:06.740864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:37.654  [2024-12-09 16:36:06.742011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.757 ms, result 0
00:25:39.032  
[2024-12-09T16:36:09.150Z] Copying: 25/1024 [MB] (25 MBps)
[2024-12-09T16:36:10.089Z] Copying: 50/1024 [MB] (25 MBps)
[2024-12-09T16:36:11.028Z] Copying: 75/1024 [MB] (24 MBps)
[2024-12-09T16:36:11.968Z] Copying: 101/1024 [MB] (25 MBps)
[2024-12-09T16:36:13.348Z] Copying: 126/1024 [MB] (25 MBps)
[2024-12-09T16:36:14.287Z] Copying: 151/1024 [MB] (24 MBps)
[2024-12-09T16:36:15.226Z] Copying: 176/1024 [MB] (25 MBps)
[2024-12-09T16:36:16.163Z] Copying: 201/1024 [MB] (25 MBps)
[2024-12-09T16:36:17.101Z] Copying: 225/1024 [MB] (23 MBps)
[2024-12-09T16:36:18.042Z] Copying: 249/1024 [MB] (24 MBps)
[2024-12-09T16:36:18.981Z] Copying: 275/1024 [MB] (25 MBps)
[2024-12-09T16:36:20.361Z] Copying: 300/1024 [MB] (25 MBps)
[2024-12-09T16:36:21.300Z] Copying: 325/1024 [MB] (25 MBps)
[2024-12-09T16:36:22.239Z] Copying: 350/1024 [MB] (25 MBps)
[2024-12-09T16:36:23.179Z] Copying: 376/1024 [MB] (25 MBps)
[2024-12-09T16:36:24.117Z] Copying: 401/1024 [MB] (25 MBps)
[2024-12-09T16:36:25.056Z] Copying: 426/1024 [MB] (25 MBps)
[2024-12-09T16:36:25.992Z] Copying: 451/1024 [MB] (24 MBps)
[2024-12-09T16:36:26.929Z] Copying: 475/1024 [MB] (23 MBps)
[2024-12-09T16:36:28.310Z] Copying: 499/1024 [MB] (24 MBps)
[2024-12-09T16:36:29.247Z] Copying: 524/1024 [MB] (24 MBps)
[2024-12-09T16:36:30.225Z] Copying: 549/1024 [MB] (25 MBps)
[2024-12-09T16:36:31.197Z] Copying: 575/1024 [MB] (25 MBps)
[2024-12-09T16:36:32.136Z] Copying: 600/1024 [MB] (25 MBps)
[2024-12-09T16:36:33.074Z] Copying: 625/1024 [MB] (25 MBps)
[2024-12-09T16:36:34.012Z] Copying: 651/1024 [MB] (25 MBps)
[2024-12-09T16:36:34.950Z] Copying: 676/1024 [MB] (25 MBps)
[2024-12-09T16:36:36.337Z] Copying: 702/1024 [MB] (25 MBps)
[2024-12-09T16:36:36.906Z] Copying: 728/1024 [MB] (25 MBps)
[2024-12-09T16:36:38.286Z] Copying: 754/1024 [MB] (25 MBps)
[2024-12-09T16:36:39.223Z] Copying: 779/1024 [MB] (25 MBps)
[2024-12-09T16:36:40.162Z] Copying: 804/1024 [MB] (25 MBps)
[2024-12-09T16:36:41.100Z] Copying: 830/1024 [MB] (25 MBps)
[2024-12-09T16:36:42.038Z] Copying: 856/1024 [MB] (25 MBps)
[2024-12-09T16:36:42.976Z] Copying: 882/1024 [MB] (25 MBps)
[2024-12-09T16:36:43.913Z] Copying: 907/1024 [MB] (24 MBps)
[2024-12-09T16:36:45.292Z] Copying: 931/1024 [MB] (23 MBps)
[2024-12-09T16:36:46.231Z] Copying: 955/1024 [MB] (24 MBps)
[2024-12-09T16:36:47.170Z] Copying: 979/1024 [MB] (23 MBps)
[2024-12-09T16:36:48.109Z] Copying: 1002/1024 [MB] (23 MBps)
[2024-12-09T16:36:48.109Z] Copying: 1024/1024 [MB] (average 25 MBps)
00:26:18.930  [2024-12-09 16:36:47.878429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.930  [2024-12-09 16:36:47.878537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:26:18.930  [2024-12-09 16:36:47.878572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:26:18.930  [2024-12-09 16:36:47.878595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
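The copy above ends with a reported 'average 25 MBps'; the progress timestamps bound that rate directly. A quick check using the first and last ticks (values copied from the progress lines above):

```python
from datetime import datetime

# First and last progress ticks of the 1024 MB copy above.
t0 = datetime.fromisoformat("2024-12-09T16:36:09.150")
t1 = datetime.fromisoformat("2024-12-09T16:36:48.109")
mb0, mb1 = 25, 1024  # MB already copied at each tick

rate = (mb1 - mb0) / (t1 - t0).total_seconds()
print(f"{rate:.1f} MBps")  # ~25.6 MBps, consistent with the reported average
```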
00:26:18.930  [2024-12-09 16:36:47.878642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:18.931  [2024-12-09 16:36:47.887523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.887583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:26:18.931  [2024-12-09 16:36:47.887601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.857 ms
00:26:18.931  [2024-12-09 16:36:47.887616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.887922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.887942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:26:18.931  [2024-12-09 16:36:47.887957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.264 ms
00:26:18.931  [2024-12-09 16:36:47.887972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.892043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.892082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:26:18.931  [2024-12-09 16:36:47.892100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.055 ms
00:26:18.931  [2024-12-09 16:36:47.892122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.898104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.898139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:26:18.931  [2024-12-09 16:36:47.898151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.961 ms
00:26:18.931  [2024-12-09 16:36:47.898161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.933690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.933730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:26:18.931  [2024-12-09 16:36:47.933744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.516 ms
00:26:18.931  [2024-12-09 16:36:47.933753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.954042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.954075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:26:18.931  [2024-12-09 16:36:47.954104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.266 ms
00:26:18.931  [2024-12-09 16:36:47.954115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.954245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.954258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:26:18.931  [2024-12-09 16:36:47.954269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.086 ms
00:26:18.931  [2024-12-09 16:36:47.954278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:47.988361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:47.988398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:26:18.931  [2024-12-09 16:36:47.988410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.122 ms
00:26:18.931  [2024-12-09 16:36:47.988419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:48.021904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:48.021938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:26:18.931  [2024-12-09 16:36:48.021950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.489 ms
00:26:18.931  [2024-12-09 16:36:48.021975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:48.054931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:48.054963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:26:18.931  [2024-12-09 16:36:48.054975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.973 ms
00:26:18.931  [2024-12-09 16:36:48.054984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:48.089198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.931  [2024-12-09 16:36:48.089239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:26:18.931  [2024-12-09 16:36:48.089255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.178 ms
00:26:18.931  [2024-12-09 16:36:48.089267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:18.931  [2024-12-09 16:36:48.089308] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:18.931  [2024-12-09 16:36:48.089351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.089999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.931  [2024-12-09 16:36:48.090112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:26:18.932  [2024-12-09 16:36:48.090817] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:26:18.932  [2024-12-09 16:36:48.090834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         3fd7dc7f-5371-4379-8893-54820b2eff53
00:26:18.932  [2024-12-09 16:36:48.090852] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:26:18.932  [2024-12-09 16:36:48.090865] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:26:18.932  [2024-12-09 16:36:48.090877] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:26:18.932  [2024-12-09 16:36:48.090890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:26:18.932  [2024-12-09 16:36:48.090944] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:18.932  [2024-12-09 16:36:48.090962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:26:18.932  [2024-12-09 16:36:48.090979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:26:18.932  [2024-12-09 16:36:48.090995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:26:18.932  [2024-12-09 16:36:48.091008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
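In the statistics dump above, 'WAF: inf' is consistent with the counters next to it: write amplification is conventionally total device writes divided by user writes, and 960 total writes against 0 user writes gives an infinite ratio. The exact definition used by ftl_debug.c is an assumption here; the arithmetic below only mirrors the dumped counters:

```python
total_writes = 960  # "total writes" from the stats dump above
user_writes = 0     # "user writes"

# Conventional write-amplification factor: device writes per user write.
waf = total_writes / user_writes if user_writes else float("inf")
print(waf)  # inf -> matches "WAF: inf"
```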
00:26:18.932  [2024-12-09 16:36:48.091021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:18.932  [2024-12-09 16:36:48.091034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:26:18.932  [2024-12-09 16:36:48.091047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.716 ms
00:26:18.932  [2024-12-09 16:36:48.091064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.111131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.192  [2024-12-09 16:36:48.111166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:26:19.192  [2024-12-09 16:36:48.111178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.044 ms
00:26:19.192  [2024-12-09 16:36:48.111188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.111719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.192  [2024-12-09 16:36:48.111730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:26:19.192  [2024-12-09 16:36:48.111746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.511 ms
00:26:19.192  [2024-12-09 16:36:48.111756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.160566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.192  [2024-12-09 16:36:48.160600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:26:19.192  [2024-12-09 16:36:48.160614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.192  [2024-12-09 16:36:48.160624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.160690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.192  [2024-12-09 16:36:48.160702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:26:19.192  [2024-12-09 16:36:48.160716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.192  [2024-12-09 16:36:48.160726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.160785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.192  [2024-12-09 16:36:48.160797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:26:19.192  [2024-12-09 16:36:48.160807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.192  [2024-12-09 16:36:48.160816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.160833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.192  [2024-12-09 16:36:48.160843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:26:19.192  [2024-12-09 16:36:48.160852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.192  [2024-12-09 16:36:48.160866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.192  [2024-12-09 16:36:48.281391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.192  [2024-12-09 16:36:48.281574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:26:19.192  [2024-12-09 16:36:48.281668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.192  [2024-12-09 16:36:48.281704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.376829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.376889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:26:19.452  [2024-12-09 16:36:48.376936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.376953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.377075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:26:19.452  [2024-12-09 16:36:48.377087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.377097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.377144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:26:19.452  [2024-12-09 16:36:48.377153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.377163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.377293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:26:19.452  [2024-12-09 16:36:48.377303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.377314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.377376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:26:19.452  [2024-12-09 16:36:48.377385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.377395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.377446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:26:19.452  [2024-12-09 16:36:48.377456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.377466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:19.452  [2024-12-09 16:36:48.377516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:26:19.452  [2024-12-09 16:36:48.377526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:19.452  [2024-12-09 16:36:48.377536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:19.452  [2024-12-09 16:36:48.377653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.028 ms, result 0
00:26:20.390  
00:26:20.390  
00:26:20.390   16:36:49 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:22.297  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
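restore.sh line 76 above verifies the restored data with `md5sum -c` against a checksum file recorded earlier in the test. For reference, that file uses the standard md5sum format of '<hex digest>  <path>'; a minimal Python equivalent for producing one line of it (the testfile path in the log is the obvious input, everything else here is illustrative):

```python
import hashlib
import sys

def md5sum_line(path, chunk=1 << 20):
    """Return one line in the format `md5sum -c` expects: '<digest>  <path>'."""
    h = hashlib.md5()
    with open(path, "rb") as fh:
        while block := fh.read(chunk):
            h.update(block)
    return f"{h.hexdigest()}  {path}"

if __name__ == "__main__":
    print(md5sum_line(sys.argv[1]))
```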
00:26:22.297   16:36:51 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:26:22.297  [2024-12-09 16:36:51.096093] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:26:22.297  [2024-12-09 16:36:51.096215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81295 ]
00:26:22.297  [2024-12-09 16:36:51.276376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.297  [2024-12-09 16:36:51.383758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:22.557  [2024-12-09 16:36:51.731283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:22.557  [2024-12-09 16:36:51.731589] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:22.817  [2024-12-09 16:36:51.891591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.891644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:26:22.817  [2024-12-09 16:36:51.891658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:26:22.817  [2024-12-09 16:36:51.891684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.891730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.891745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:26:22.817  [2024-12-09 16:36:51.891755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.026 ms
00:26:22.817  [2024-12-09 16:36:51.891765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.891786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:22.817  [2024-12-09 16:36:51.892704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:22.817  [2024-12-09 16:36:51.892733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.892744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:26:22.817  [2024-12-09 16:36:51.892755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.953 ms
00:26:22.817  [2024-12-09 16:36:51.892766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.894243] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:22.817  [2024-12-09 16:36:51.912615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.912652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:26:22.817  [2024-12-09 16:36:51.912666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.403 ms
00:26:22.817  [2024-12-09 16:36:51.912692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.912757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.912770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:26:22.817  [2024-12-09 16:36:51.912781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.022 ms
00:26:22.817  [2024-12-09 16:36:51.912791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.919575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.919605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:26:22.817  [2024-12-09 16:36:51.919617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.726 ms
00:26:22.817  [2024-12-09 16:36:51.919630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.919703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.919715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:26:22.817  [2024-12-09 16:36:51.919725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:26:22.817  [2024-12-09 16:36:51.919734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.817  [2024-12-09 16:36:51.919774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.817  [2024-12-09 16:36:51.919785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:26:22.817  [2024-12-09 16:36:51.919795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:26:22.818  [2024-12-09 16:36:51.919804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.818  [2024-12-09 16:36:51.919829] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:22.818  [2024-12-09 16:36:51.924441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.818  [2024-12-09 16:36:51.924491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:26:22.818  [2024-12-09 16:36:51.924507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.624 ms
00:26:22.818  [2024-12-09 16:36:51.924517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.818  [2024-12-09 16:36:51.924549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.818  [2024-12-09 16:36:51.924559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:26:22.818  [2024-12-09 16:36:51.924569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:26:22.818  [2024-12-09 16:36:51.924578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.818  [2024-12-09 16:36:51.924629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:22.818  [2024-12-09 16:36:51.924652] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:22.818  [2024-12-09 16:36:51.924684] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:22.818  [2024-12-09 16:36:51.924704] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:22.818  [2024-12-09 16:36:51.924795] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:22.818  [2024-12-09 16:36:51.924808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:22.818  [2024-12-09 16:36:51.924820] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:22.818  [2024-12-09 16:36:51.924832] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:26:22.818  [2024-12-09 16:36:51.924844] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:26:22.818  [2024-12-09 16:36:51.924854] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:26:22.818  [2024-12-09 16:36:51.924864] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:26:22.818  [2024-12-09 16:36:51.924876] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:26:22.818  [2024-12-09 16:36:51.924885] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
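The layout summary above fixes the L2P table size: 20971520 entries at 4 bytes each is exactly the 80.00 MiB that the 'Region l2p' entry in the NV cache layout dump below reports. A quick check (the 4 KiB logical-block figure at the end is an assumption, not something the log states):

```python
entries = 20_971_520  # "L2P entries" from the layout summary above
addr_size = 4         # "L2P address size" in bytes

print(entries * addr_size / (1 << 20), "MiB")  # 80.0 -> "Region l2p  blocks: 80.00 MiB"

# Assuming 4 KiB logical blocks (not stated in the log), the mapped user
# space would be 80 GiB, against the 102400 MiB data_btm region below.
print(entries * 4096 / (1 << 30), "GiB")  # 80.0
```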
00:26:22.818  [2024-12-09 16:36:51.924913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.818  [2024-12-09 16:36:51.924932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:26:22.818  [2024-12-09 16:36:51.924948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.287 ms
00:26:22.818  [2024-12-09 16:36:51.924979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.818  [2024-12-09 16:36:51.925060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.818  [2024-12-09 16:36:51.925071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:26:22.818  [2024-12-09 16:36:51.925081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.062 ms
00:26:22.818  [2024-12-09 16:36:51.925091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.818  [2024-12-09 16:36:51.925187] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:22.818  [2024-12-09 16:36:51.925201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:26:22.818  [2024-12-09 16:36:51.925212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:26:22.818  [2024-12-09 16:36:51.925241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:26:22.818  [2024-12-09 16:36:51.925268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:26:22.818  [2024-12-09 16:36:51.925288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:26:22.818  [2024-12-09 16:36:51.925297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:26:22.818  [2024-12-09 16:36:51.925306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:26:22.818  [2024-12-09 16:36:51.925323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:26:22.818  [2024-12-09 16:36:51.925333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:26:22.818  [2024-12-09 16:36:51.925342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:26:22.818  [2024-12-09 16:36:51.925376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:26:22.818  [2024-12-09 16:36:51.925403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:26:22.818  [2024-12-09 16:36:51.925431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:26:22.818  [2024-12-09 16:36:51.925459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:26:22.818  [2024-12-09 16:36:51.925485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:26:22.818  [2024-12-09 16:36:51.925512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:26:22.818  [2024-12-09 16:36:51.925530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:26:22.818  [2024-12-09 16:36:51.925539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:26:22.818  [2024-12-09 16:36:51.925548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:26:22.818  [2024-12-09 16:36:51.925557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:26:22.818  [2024-12-09 16:36:51.925566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:26:22.818  [2024-12-09 16:36:51.925575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:26:22.818  [2024-12-09 16:36:51.925593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:26:22.818  [2024-12-09 16:36:51.925602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925612] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:22.818  [2024-12-09 16:36:51.925622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:26:22.818  [2024-12-09 16:36:51.925631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:22.818  [2024-12-09 16:36:51.925651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:26:22.818  [2024-12-09 16:36:51.925660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:26:22.818  [2024-12-09 16:36:51.925669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:26:22.818  [2024-12-09 16:36:51.925678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:26:22.818  [2024-12-09 16:36:51.925686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:26:22.818  [2024-12-09 16:36:51.925696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:26:22.818  [2024-12-09 16:36:51.925707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:22.818  [2024-12-09 16:36:51.925719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:22.818  [2024-12-09 16:36:51.925734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:26:22.818  [2024-12-09 16:36:51.925745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:26:22.818  [2024-12-09 16:36:51.925756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:26:22.818  [2024-12-09 16:36:51.925766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:26:22.818  [2024-12-09 16:36:51.925776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:26:22.818  [2024-12-09 16:36:51.925786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:26:22.818  [2024-12-09 16:36:51.925797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:26:22.818  [2024-12-09 16:36:51.925807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:26:22.818  [2024-12-09 16:36:51.925817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:26:22.818  [2024-12-09 16:36:51.925827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:26:22.818  [2024-12-09 16:36:51.925837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:26:22.818  [2024-12-09 16:36:51.925847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:26:22.818  [2024-12-09 16:36:51.925857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:26:22.818  [2024-12-09 16:36:51.925868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:26:22.818  [2024-12-09 16:36:51.925878] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:22.818  [2024-12-09 16:36:51.925889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:22.819  [2024-12-09 16:36:51.925900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:22.819  [2024-12-09 16:36:51.925922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:26:22.819  [2024-12-09 16:36:51.925933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:26:22.819  [2024-12-09 16:36:51.925944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:22.819  [2024-12-09 16:36:51.925955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.819  [2024-12-09 16:36:51.925966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:26:22.819  [2024-12-09 16:36:51.925976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.823 ms
00:26:22.819  [2024-12-09 16:36:51.925985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.819  [2024-12-09 16:36:51.961734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.819  [2024-12-09 16:36:51.961770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:26:22.819  [2024-12-09 16:36:51.961783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.759 ms
00:26:22.819  [2024-12-09 16:36:51.961797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:22.819  [2024-12-09 16:36:51.961866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:22.819  [2024-12-09 16:36:51.961876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:26:22.819  [2024-12-09 16:36:51.961887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:26:22.819  [2024-12-09 16:36:51.961912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.032751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.032790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:26:23.079  [2024-12-09 16:36:52.032804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 70.869 ms
00:26:23.079  [2024-12-09 16:36:52.032814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.032852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.032863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:26:23.079  [2024-12-09 16:36:52.032877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:26:23.079  [2024-12-09 16:36:52.032887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.033436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.033458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:26:23.079  [2024-12-09 16:36:52.033470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.439 ms
00:26:23.079  [2024-12-09 16:36:52.033480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.033596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.033609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:26:23.079  [2024-12-09 16:36:52.033626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.094 ms
00:26:23.079  [2024-12-09 16:36:52.033635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.050299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.050336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:26:23.079  [2024-12-09 16:36:52.050349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.670 ms
00:26:23.079  [2024-12-09 16:36:52.050359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.068465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:26:23.079  [2024-12-09 16:36:52.068503] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:26:23.079  [2024-12-09 16:36:52.068518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.068528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:26:23.079  [2024-12-09 16:36:52.068538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.089 ms
00:26:23.079  [2024-12-09 16:36:52.068547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.096339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.096376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:26:23.079  [2024-12-09 16:36:52.096389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.794 ms
00:26:23.079  [2024-12-09 16:36:52.096399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.113788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.113823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:26:23.079  [2024-12-09 16:36:52.113835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.366 ms
00:26:23.079  [2024-12-09 16:36:52.113844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.130992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.131025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:26:23.079  [2024-12-09 16:36:52.131037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.138 ms
00:26:23.079  [2024-12-09 16:36:52.131062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.131815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.131848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:26:23.079  [2024-12-09 16:36:52.131864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.651 ms
00:26:23.079  [2024-12-09 16:36:52.131875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.212148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.212203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:26:23.079  [2024-12-09 16:36:52.212226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 80.382 ms
00:26:23.079  [2024-12-09 16:36:52.212237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.223113] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:26:23.079  [2024-12-09 16:36:52.225753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.225789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:26:23.079  [2024-12-09 16:36:52.225806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.490 ms
00:26:23.079  [2024-12-09 16:36:52.225819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.225923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.225945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:26:23.079  [2024-12-09 16:36:52.225965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:26:23.079  [2024-12-09 16:36:52.225975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.226053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.226065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:26:23.079  [2024-12-09 16:36:52.226076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:26:23.079  [2024-12-09 16:36:52.226086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.226111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.226122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:26:23.079  [2024-12-09 16:36:52.226133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:26:23.079  [2024-12-09 16:36:52.226143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.079  [2024-12-09 16:36:52.226177] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:23.079  [2024-12-09 16:36:52.226189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.079  [2024-12-09 16:36:52.226199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:26:23.079  [2024-12-09 16:36:52.226209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:26:23.079  [2024-12-09 16:36:52.226219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.339  [2024-12-09 16:36:52.261038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.339  [2024-12-09 16:36:52.261075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:26:23.339  [2024-12-09 16:36:52.261095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.851 ms
00:26:23.339  [2024-12-09 16:36:52.261122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.339  [2024-12-09 16:36:52.261199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:23.339  [2024-12-09 16:36:52.261210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:26:23.339  [2024-12-09 16:36:52.261221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.042 ms
00:26:23.339  [2024-12-09 16:36:52.261231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:23.339  [2024-12-09 16:36:52.262305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.843 ms, result 0
00:26:24.277  
[2024-12-09T16:36:54.397Z] Copying: 23/1024 [MB] (23 MBps)
[2024-12-09T16:36:55.335Z] Copying: 46/1024 [MB] (23 MBps)
[2024-12-09T16:36:56.272Z] Copying: 69/1024 [MB] (22 MBps)
[2024-12-09T16:36:57.652Z] Copying: 92/1024 [MB] (22 MBps)
[2024-12-09T16:36:58.590Z] Copying: 114/1024 [MB] (22 MBps)
[2024-12-09T16:36:59.529Z] Copying: 137/1024 [MB] (22 MBps)
[2024-12-09T16:37:00.467Z] Copying: 159/1024 [MB] (22 MBps)
[2024-12-09T16:37:01.439Z] Copying: 181/1024 [MB] (21 MBps)
[2024-12-09T16:37:02.392Z] Copying: 203/1024 [MB] (21 MBps)
[2024-12-09T16:37:03.329Z] Copying: 225/1024 [MB] (22 MBps)
[2024-12-09T16:37:04.267Z] Copying: 247/1024 [MB] (22 MBps)
[2024-12-09T16:37:05.647Z] Copying: 269/1024 [MB] (21 MBps)
[2024-12-09T16:37:06.584Z] Copying: 291/1024 [MB] (21 MBps)
[2024-12-09T16:37:07.521Z] Copying: 313/1024 [MB] (21 MBps)
[2024-12-09T16:37:08.459Z] Copying: 335/1024 [MB] (22 MBps)
[2024-12-09T16:37:09.398Z] Copying: 358/1024 [MB] (22 MBps)
[2024-12-09T16:37:10.335Z] Copying: 380/1024 [MB] (22 MBps)
[2024-12-09T16:37:11.273Z] Copying: 402/1024 [MB] (22 MBps)
[2024-12-09T16:37:12.653Z] Copying: 425/1024 [MB] (22 MBps)
[2024-12-09T16:37:13.591Z] Copying: 447/1024 [MB] (22 MBps)
[2024-12-09T16:37:14.528Z] Copying: 469/1024 [MB] (22 MBps)
[2024-12-09T16:37:15.466Z] Copying: 492/1024 [MB] (22 MBps)
[2024-12-09T16:37:16.404Z] Copying: 514/1024 [MB] (22 MBps)
[2024-12-09T16:37:17.342Z] Copying: 537/1024 [MB] (22 MBps)
[2024-12-09T16:37:18.281Z] Copying: 560/1024 [MB] (22 MBps)
[2024-12-09T16:37:19.660Z] Copying: 583/1024 [MB] (22 MBps)
[2024-12-09T16:37:20.598Z] Copying: 605/1024 [MB] (22 MBps)
[2024-12-09T16:37:21.537Z] Copying: 627/1024 [MB] (21 MBps)
[2024-12-09T16:37:22.475Z] Copying: 649/1024 [MB] (22 MBps)
[2024-12-09T16:37:23.412Z] Copying: 672/1024 [MB] (22 MBps)
[2024-12-09T16:37:24.350Z] Copying: 694/1024 [MB] (22 MBps)
[2024-12-09T16:37:25.294Z] Copying: 717/1024 [MB] (22 MBps)
[2024-12-09T16:37:26.234Z] Copying: 739/1024 [MB] (22 MBps)
[2024-12-09T16:37:27.614Z] Copying: 762/1024 [MB] (22 MBps)
[2024-12-09T16:37:28.552Z] Copying: 783/1024 [MB] (21 MBps)
[2024-12-09T16:37:29.490Z] Copying: 805/1024 [MB] (21 MBps)
[2024-12-09T16:37:30.428Z] Copying: 827/1024 [MB] (21 MBps)
[2024-12-09T16:37:31.366Z] Copying: 848/1024 [MB] (21 MBps)
[2024-12-09T16:37:32.358Z] Copying: 870/1024 [MB] (21 MBps)
[2024-12-09T16:37:33.313Z] Copying: 893/1024 [MB] (22 MBps)
[2024-12-09T16:37:34.250Z] Copying: 916/1024 [MB] (23 MBps)
[2024-12-09T16:37:35.630Z] Copying: 938/1024 [MB] (21 MBps)
[2024-12-09T16:37:36.568Z] Copying: 960/1024 [MB] (22 MBps)
[2024-12-09T16:37:37.505Z] Copying: 983/1024 [MB] (23 MBps)
[2024-12-09T16:37:38.443Z] Copying: 1005/1024 [MB] (22 MBps)
[2024-12-09T16:37:38.702Z] Copying: 1023/1024 [MB] (17 MBps)
[2024-12-09T16:37:38.702Z] Copying: 1024/1024 [MB] (average 22 MBps)
00:27:09.523  [2024-12-09 16:37:38.544240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.544405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:27:09.523  [2024-12-09 16:37:38.544518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:27:09.523  [2024-12-09 16:37:38.544535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.523  [2024-12-09 16:37:38.545812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:09.523  [2024-12-09 16:37:38.551482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.551642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:27:09.523  [2024-12-09 16:37:38.551663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.640 ms
00:27:09.523  [2024-12-09 16:37:38.551674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.523  [2024-12-09 16:37:38.564734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.564774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:27:09.523  [2024-12-09 16:37:38.564788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.752 ms
00:27:09.523  [2024-12-09 16:37:38.564806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.523  [2024-12-09 16:37:38.586852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.586924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:27:09.523  [2024-12-09 16:37:38.586949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.062 ms
00:27:09.523  [2024-12-09 16:37:38.586967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.523  [2024-12-09 16:37:38.591794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.591831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:27:09.523  [2024-12-09 16:37:38.591843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.782 ms
00:27:09.523  [2024-12-09 16:37:38.591859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.523  [2024-12-09 16:37:38.626044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.626209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:27:09.523  [2024-12-09 16:37:38.626229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.166 ms
00:27:09.523  [2024-12-09 16:37:38.626240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.523  [2024-12-09 16:37:38.645932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.523  [2024-12-09 16:37:38.645970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:27:09.523  [2024-12-09 16:37:38.645983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.688 ms
00:27:09.523  [2024-12-09 16:37:38.645993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.783  [2024-12-09 16:37:38.743693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.783  [2024-12-09 16:37:38.743748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:27:09.783  [2024-12-09 16:37:38.743763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 97.819 ms
00:27:09.784  [2024-12-09 16:37:38.743773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.784  [2024-12-09 16:37:38.777942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.784  [2024-12-09 16:37:38.777978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:27:09.784  [2024-12-09 16:37:38.777991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.205 ms
00:27:09.784  [2024-12-09 16:37:38.778001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.784  [2024-12-09 16:37:38.811571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.784  [2024-12-09 16:37:38.811607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:27:09.784  [2024-12-09 16:37:38.811620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.588 ms
00:27:09.784  [2024-12-09 16:37:38.811629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.784  [2024-12-09 16:37:38.845617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.784  [2024-12-09 16:37:38.845652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:27:09.784  [2024-12-09 16:37:38.845665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.005 ms
00:27:09.784  [2024-12-09 16:37:38.845675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.784  [2024-12-09 16:37:38.879970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.784  [2024-12-09 16:37:38.880005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:27:09.784  [2024-12-09 16:37:38.880018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.269 ms
00:27:09.784  [2024-12-09 16:37:38.880027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.784  [2024-12-09 16:37:38.880061] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:09.784  [2024-12-09 16:37:38.880076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:    79616 / 261120 	wr_cnt: 1	state: open
00:27:09.784  [2024-12-09 16:37:38.880088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.784  [2024-12-09 16:37:38.880822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.880992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.785  [2024-12-09 16:37:38.881129] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:27:09.785  [2024-12-09 16:37:38.881138] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         3fd7dc7f-5371-4379-8893-54820b2eff53
00:27:09.785  [2024-12-09 16:37:38.881149] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    79616
00:27:09.785  [2024-12-09 16:37:38.881158] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        80576
00:27:09.785  [2024-12-09 16:37:38.881166] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         79616
00:27:09.785  [2024-12-09 16:37:38.881176] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0121
00:27:09.785  [2024-12-09 16:37:38.881198] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:09.785  [2024-12-09 16:37:38.881208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:27:09.785  [2024-12-09 16:37:38.881226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:27:09.785  [2024-12-09 16:37:38.881234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:27:09.785  [2024-12-09 16:37:38.881243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:27:09.785  [2024-12-09 16:37:38.881252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.785  [2024-12-09 16:37:38.881261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:27:09.785  [2024-12-09 16:37:38.881271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.193 ms
00:27:09.785  [2024-12-09 16:37:38.881280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.785  [2024-12-09 16:37:38.899874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.785  [2024-12-09 16:37:38.899923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:27:09.785  [2024-12-09 16:37:38.899950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.589 ms
00:27:09.785  [2024-12-09 16:37:38.899967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.785  [2024-12-09 16:37:38.900495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.785  [2024-12-09 16:37:38.900518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:27:09.785  [2024-12-09 16:37:38.900530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.500 ms
00:27:09.785  [2024-12-09 16:37:38.900541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.785  [2024-12-09 16:37:38.948180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.785  [2024-12-09 16:37:38.948213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:09.785  [2024-12-09 16:37:38.948226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.785  [2024-12-09 16:37:38.948235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.785  [2024-12-09 16:37:38.948289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.785  [2024-12-09 16:37:38.948300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:09.785  [2024-12-09 16:37:38.948310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.785  [2024-12-09 16:37:38.948319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.785  [2024-12-09 16:37:38.948374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.785  [2024-12-09 16:37:38.948392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:09.785  [2024-12-09 16:37:38.948401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.785  [2024-12-09 16:37:38.948410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.785  [2024-12-09 16:37:38.948424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.785  [2024-12-09 16:37:38.948435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:09.785  [2024-12-09 16:37:38.948444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.785  [2024-12-09 16:37:38.948453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.044  [2024-12-09 16:37:39.063458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.044  [2024-12-09 16:37:39.063511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:10.044  [2024-12-09 16:37:39.063525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.044  [2024-12-09 16:37:39.063535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.044  [2024-12-09 16:37:39.156746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.044  [2024-12-09 16:37:39.156791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:10.044  [2024-12-09 16:37:39.156805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.044  [2024-12-09 16:37:39.156817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.044  [2024-12-09 16:37:39.156922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.044  [2024-12-09 16:37:39.156958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:10.044  [2024-12-09 16:37:39.156973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.044  [2024-12-09 16:37:39.156989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.044  [2024-12-09 16:37:39.157043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.044  [2024-12-09 16:37:39.157055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:10.044  [2024-12-09 16:37:39.157065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.044  [2024-12-09 16:37:39.157075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.044  [2024-12-09 16:37:39.157179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.044  [2024-12-09 16:37:39.157192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:10.044  [2024-12-09 16:37:39.157203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.044  [2024-12-09 16:37:39.157217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.044  [2024-12-09 16:37:39.157249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.045  [2024-12-09 16:37:39.157261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:27:10.045  [2024-12-09 16:37:39.157271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.045  [2024-12-09 16:37:39.157280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.045  [2024-12-09 16:37:39.157314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.045  [2024-12-09 16:37:39.157324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:10.045  [2024-12-09 16:37:39.157335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.045  [2024-12-09 16:37:39.157345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.045  [2024-12-09 16:37:39.157387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.045  [2024-12-09 16:37:39.157399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:10.045  [2024-12-09 16:37:39.157408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.045  [2024-12-09 16:37:39.157418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.045  [2024-12-09 16:37:39.157529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 616.444 ms, result 0
00:27:11.950  
00:27:11.950  
00:27:11.950   16:37:40 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:27:11.950  [2024-12-09 16:37:40.810607] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:27:11.950  [2024-12-09 16:37:40.810757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81791 ]
00:27:11.950  [2024-12-09 16:37:40.990745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:11.950  [2024-12-09 16:37:41.096976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:12.521  [2024-12-09 16:37:41.443308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:12.521  [2024-12-09 16:37:41.443594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:12.521  [2024-12-09 16:37:41.602982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.603029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:27:12.521  [2024-12-09 16:37:41.603044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:27:12.521  [2024-12-09 16:37:41.603054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.603099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.603113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:12.521  [2024-12-09 16:37:41.603123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.025 ms
00:27:12.521  [2024-12-09 16:37:41.603132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.603152] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:12.521  [2024-12-09 16:37:41.604117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:12.521  [2024-12-09 16:37:41.604142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.604153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:12.521  [2024-12-09 16:37:41.604165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.995 ms
00:27:12.521  [2024-12-09 16:37:41.604176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.605610] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:12.521  [2024-12-09 16:37:41.624082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.624120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:27:12.521  [2024-12-09 16:37:41.624134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.502 ms
00:27:12.521  [2024-12-09 16:37:41.624144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.624207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.624219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:27:12.521  [2024-12-09 16:37:41.624229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.022 ms
00:27:12.521  [2024-12-09 16:37:41.624239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.631041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.631072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:12.521  [2024-12-09 16:37:41.631084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.744 ms
00:27:12.521  [2024-12-09 16:37:41.631099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.631175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.631188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:12.521  [2024-12-09 16:37:41.631198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.056 ms
00:27:12.521  [2024-12-09 16:37:41.631207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.631249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.631261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:27:12.521  [2024-12-09 16:37:41.631270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:27:12.521  [2024-12-09 16:37:41.631280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.631306] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:12.521  [2024-12-09 16:37:41.635987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.636021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:12.521  [2024-12-09 16:37:41.636036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.693 ms
00:27:12.521  [2024-12-09 16:37:41.636046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.636079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.636090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:27:12.521  [2024-12-09 16:37:41.636101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:27:12.521  [2024-12-09 16:37:41.636110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.636161] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:12.521  [2024-12-09 16:37:41.636186] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:27:12.521  [2024-12-09 16:37:41.636219] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:12.521  [2024-12-09 16:37:41.636238] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:27:12.521  [2024-12-09 16:37:41.636320] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:12.521  [2024-12-09 16:37:41.636334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:12.521  [2024-12-09 16:37:41.636346] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:12.521  [2024-12-09 16:37:41.636359] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636371] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636383] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:27:12.521  [2024-12-09 16:37:41.636393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:27:12.521  [2024-12-09 16:37:41.636405] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:27:12.521  [2024-12-09 16:37:41.636416] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:27:12.521  [2024-12-09 16:37:41.636428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.636437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:27:12.521  [2024-12-09 16:37:41.636448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.269 ms
00:27:12.521  [2024-12-09 16:37:41.636457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.636523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.521  [2024-12-09 16:37:41.636534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:27:12.521  [2024-12-09 16:37:41.636544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:27:12.521  [2024-12-09 16:37:41.636553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.521  [2024-12-09 16:37:41.636640] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:12.521  [2024-12-09 16:37:41.636655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:12.521  [2024-12-09 16:37:41.636665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:12.521  [2024-12-09 16:37:41.636695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:12.521  [2024-12-09 16:37:41.636724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:27:12.521  [2024-12-09 16:37:41.636743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:12.521  [2024-12-09 16:37:41.636751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:27:12.521  [2024-12-09 16:37:41.636760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:27:12.521  [2024-12-09 16:37:41.636777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:12.521  [2024-12-09 16:37:41.636786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:27:12.521  [2024-12-09 16:37:41.636795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:12.521  [2024-12-09 16:37:41.636813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:12.521  [2024-12-09 16:37:41.636839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:12.521  [2024-12-09 16:37:41.636864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:27:12.521  [2024-12-09 16:37:41.636873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:12.521  [2024-12-09 16:37:41.636881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:12.522  [2024-12-09 16:37:41.636889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:27:12.522  [2024-12-09 16:37:41.636927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:12.522  [2024-12-09 16:37:41.636944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:12.522  [2024-12-09 16:37:41.636959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:27:12.522  [2024-12-09 16:37:41.636971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:12.522  [2024-12-09 16:37:41.636983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:12.522  [2024-12-09 16:37:41.636994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:27:12.522  [2024-12-09 16:37:41.637005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:27:12.522  [2024-12-09 16:37:41.637017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:12.522  [2024-12-09 16:37:41.637036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:27:12.522  [2024-12-09 16:37:41.637045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:27:12.522  [2024-12-09 16:37:41.637054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:27:12.522  [2024-12-09 16:37:41.637062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:27:12.522  [2024-12-09 16:37:41.637071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:12.522  [2024-12-09 16:37:41.637079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:27:12.522  [2024-12-09 16:37:41.637088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:27:12.522  [2024-12-09 16:37:41.637105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:12.522  [2024-12-09 16:37:41.637114] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:12.522  [2024-12-09 16:37:41.637124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:12.522  [2024-12-09 16:37:41.637133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:27:12.522  [2024-12-09 16:37:41.637142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:12.522  [2024-12-09 16:37:41.637152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:12.522  [2024-12-09 16:37:41.637161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:27:12.522  [2024-12-09 16:37:41.637170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:27:12.522  [2024-12-09 16:37:41.637179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:27:12.522  [2024-12-09 16:37:41.637187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:27:12.522  [2024-12-09 16:37:41.637197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:27:12.522  [2024-12-09 16:37:41.637209] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:27:12.522  [2024-12-09 16:37:41.637222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:27:12.522  [2024-12-09 16:37:41.637246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:27:12.522  [2024-12-09 16:37:41.637256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:27:12.522  [2024-12-09 16:37:41.637266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:27:12.522  [2024-12-09 16:37:41.637276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:27:12.522  [2024-12-09 16:37:41.637286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:27:12.522  [2024-12-09 16:37:41.637296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:27:12.522  [2024-12-09 16:37:41.637307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:27:12.522  [2024-12-09 16:37:41.637317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:27:12.522  [2024-12-09 16:37:41.637327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:27:12.522  [2024-12-09 16:37:41.637391] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:27:12.522  [2024-12-09 16:37:41.637403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:12.522  [2024-12-09 16:37:41.637425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:12.522  [2024-12-09 16:37:41.637434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:12.522  [2024-12-09 16:37:41.637444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
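The blk_offs/blk_sz values in the region tables above are hex block counts. The region dump further up pairs 0x20 blocks with 0.12 MiB, which implies a 4 KiB FTL block; under that assumption the hex sizes convert directly. A minimal bash check using values from this dump:

    printf '%d MiB\n' $(( 0x1900000 * 4 / 1024 ))   # -> 102400 MiB (the data region, type 0x9)
    printf '%d KiB\n' $(( 0x20 * 4 ))               # -> 128 KiB, i.e. the 0.12 MiB superblock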
00:27:12.522  [2024-12-09 16:37:41.637455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.522  [2024-12-09 16:37:41.637466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:27:12.522  [2024-12-09 16:37:41.637475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.865 ms
00:27:12.522  [2024-12-09 16:37:41.637485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.522  [2024-12-09 16:37:41.675342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.522  [2024-12-09 16:37:41.675378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:12.522  [2024-12-09 16:37:41.675392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.863 ms
00:27:12.522  [2024-12-09 16:37:41.675406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.522  [2024-12-09 16:37:41.675476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.522  [2024-12-09 16:37:41.675487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:27:12.522  [2024-12-09 16:37:41.675498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:27:12.522  [2024-12-09 16:37:41.675507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.746894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.746937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:12.782  [2024-12-09 16:37:41.746950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 71.448 ms
00:27:12.782  [2024-12-09 16:37:41.746961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.746999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.747011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:12.782  [2024-12-09 16:37:41.747025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:27:12.782  [2024-12-09 16:37:41.747049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.747595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.747616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:12.782  [2024-12-09 16:37:41.747627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.478 ms
00:27:12.782  [2024-12-09 16:37:41.747638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.747754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.747768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:12.782  [2024-12-09 16:37:41.747785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.094 ms
00:27:12.782  [2024-12-09 16:37:41.747795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.767025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.767061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:12.782  [2024-12-09 16:37:41.767075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.239 ms
00:27:12.782  [2024-12-09 16:37:41.767085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.785915] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:27:12.782  [2024-12-09 16:37:41.785959] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:12.782  [2024-12-09 16:37:41.785975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.785987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:27:12.782  [2024-12-09 16:37:41.785999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.824 ms
00:27:12.782  [2024-12-09 16:37:41.786009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.814476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.814516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:27:12.782  [2024-12-09 16:37:41.814530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.470 ms
00:27:12.782  [2024-12-09 16:37:41.814540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.831584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.831622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:27:12.782  [2024-12-09 16:37:41.831635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.029 ms
00:27:12.782  [2024-12-09 16:37:41.831646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.848314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.848349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:27:12.782  [2024-12-09 16:37:41.848362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.657 ms
00:27:12.782  [2024-12-09 16:37:41.848371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.849132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.849166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:27:12.782  [2024-12-09 16:37:41.849183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.657 ms
00:27:12.782  [2024-12-09 16:37:41.849193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.929091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.929152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:27:12.782  [2024-12-09 16:37:41.929173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 80.001 ms
00:27:12.782  [2024-12-09 16:37:41.929184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.939237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:27:12.782  [2024-12-09 16:37:41.941412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.941444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:27:12.782  [2024-12-09 16:37:41.941457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.204 ms
00:27:12.782  [2024-12-09 16:37:41.941468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.941540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.941554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:27:12.782  [2024-12-09 16:37:41.941568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:27:12.782  [2024-12-09 16:37:41.941578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.942896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.942956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:27:12.782  [2024-12-09 16:37:41.942969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.279 ms
00:27:12.782  [2024-12-09 16:37:41.942980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.782  [2024-12-09 16:37:41.943007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.782  [2024-12-09 16:37:41.943020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:27:12.783  [2024-12-09 16:37:41.943030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:27:12.783  [2024-12-09 16:37:41.943040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:12.783  [2024-12-09 16:37:41.943083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:12.783  [2024-12-09 16:37:41.943096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.783  [2024-12-09 16:37:41.943108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:27:12.783  [2024-12-09 16:37:41.943119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:27:12.783  [2024-12-09 16:37:41.943129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.042  [2024-12-09 16:37:41.977578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.042  [2024-12-09 16:37:41.977616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:27:13.042  [2024-12-09 16:37:41.977636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.485 ms
00:27:13.042  [2024-12-09 16:37:41.977646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.042  [2024-12-09 16:37:41.977711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:13.042  [2024-12-09 16:37:41.977723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:27:13.042  [2024-12-09 16:37:41.977733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.029 ms
00:27:13.042  [2024-12-09 16:37:41.977743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:13.042  [2024-12-09 16:37:41.978929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.091 ms, result 0
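Each management step above is reported as a trace_step Action with a name, duration, and status, and the final line gives the startup total (376.091 ms). On a saved copy of this output ('build.log' is a hypothetical file name), the slowest steps can be ranked with a short awk pass; a minimal sketch assuming the exact trace_step format shown:

    awk '/trace_step/ && /name:/     { sub(/.*name: +/, ""); n = $0 }
         /trace_step/ && /duration:/ { print $(NF-1), "ms -", n }' build.log | sort -rn | head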
00:27:14.421  
[2024-12-09T16:37:44.536Z] Copying: 16/1024 [MB] (16 MBps)
[2024-12-09T16:37:45.474Z] Copying: 39/1024 [MB] (23 MBps)
[2024-12-09T16:37:46.411Z] Copying: 64/1024 [MB] (24 MBps)
[2024-12-09T16:37:47.348Z] Copying: 88/1024 [MB] (24 MBps)
[2024-12-09T16:37:48.289Z] Copying: 113/1024 [MB] (24 MBps)
[2024-12-09T16:37:49.225Z] Copying: 137/1024 [MB] (24 MBps)
[2024-12-09T16:37:50.605Z] Copying: 161/1024 [MB] (23 MBps)
[2024-12-09T16:37:51.543Z] Copying: 186/1024 [MB] (25 MBps)
[2024-12-09T16:37:52.481Z] Copying: 211/1024 [MB] (24 MBps)
[2024-12-09T16:37:53.419Z] Copying: 235/1024 [MB] (23 MBps)
[2024-12-09T16:37:54.357Z] Copying: 259/1024 [MB] (23 MBps)
[2024-12-09T16:37:55.292Z] Copying: 282/1024 [MB] (23 MBps)
[2024-12-09T16:37:56.229Z] Copying: 306/1024 [MB] (23 MBps)
[2024-12-09T16:37:57.608Z] Copying: 331/1024 [MB] (24 MBps)
[2024-12-09T16:37:58.177Z] Copying: 355/1024 [MB] (24 MBps)
[2024-12-09T16:37:59.556Z] Copying: 379/1024 [MB] (24 MBps)
[2024-12-09T16:38:00.493Z] Copying: 403/1024 [MB] (23 MBps)
[2024-12-09T16:38:01.434Z] Copying: 428/1024 [MB] (24 MBps)
[2024-12-09T16:38:02.371Z] Copying: 452/1024 [MB] (24 MBps)
[2024-12-09T16:38:03.309Z] Copying: 476/1024 [MB] (23 MBps)
[2024-12-09T16:38:04.319Z] Copying: 499/1024 [MB] (22 MBps)
[2024-12-09T16:38:05.261Z] Copying: 522/1024 [MB] (23 MBps)
[2024-12-09T16:38:06.197Z] Copying: 547/1024 [MB] (24 MBps)
[2024-12-09T16:38:07.577Z] Copying: 570/1024 [MB] (23 MBps)
[2024-12-09T16:38:08.514Z] Copying: 594/1024 [MB] (24 MBps)
[2024-12-09T16:38:09.452Z] Copying: 618/1024 [MB] (23 MBps)
[2024-12-09T16:38:10.390Z] Copying: 641/1024 [MB] (23 MBps)
[2024-12-09T16:38:11.328Z] Copying: 666/1024 [MB] (24 MBps)
[2024-12-09T16:38:12.267Z] Copying: 690/1024 [MB] (23 MBps)
[2024-12-09T16:38:13.205Z] Copying: 713/1024 [MB] (23 MBps)
[2024-12-09T16:38:14.144Z] Copying: 737/1024 [MB] (23 MBps)
[2024-12-09T16:38:15.523Z] Copying: 761/1024 [MB] (24 MBps)
[2024-12-09T16:38:16.461Z] Copying: 785/1024 [MB] (23 MBps)
[2024-12-09T16:38:17.400Z] Copying: 809/1024 [MB] (24 MBps)
[2024-12-09T16:38:18.338Z] Copying: 834/1024 [MB] (24 MBps)
[2024-12-09T16:38:19.277Z] Copying: 859/1024 [MB] (24 MBps)
[2024-12-09T16:38:20.215Z] Copying: 883/1024 [MB] (24 MBps)
[2024-12-09T16:38:21.153Z] Copying: 908/1024 [MB] (24 MBps)
[2024-12-09T16:38:22.534Z] Copying: 932/1024 [MB] (24 MBps)
[2024-12-09T16:38:23.471Z] Copying: 956/1024 [MB] (24 MBps)
[2024-12-09T16:38:24.410Z] Copying: 980/1024 [MB] (24 MBps)
[2024-12-09T16:38:24.978Z] Copying: 1005/1024 [MB] (24 MBps)
[2024-12-09T16:38:25.239Z] Copying: 1024/1024 [MB] (average 23 MBps)
00:27:56.060  [2024-12-09 16:38:25.015288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.015360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:27:56.060  [2024-12-09 16:38:25.015387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:27:56.060  [2024-12-09 16:38:25.015402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.060  [2024-12-09 16:38:25.015434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:56.060  [2024-12-09 16:38:25.021396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.021437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:27:56.060  [2024-12-09 16:38:25.021454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.946 ms
00:27:56.060  [2024-12-09 16:38:25.021468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.060  [2024-12-09 16:38:25.021720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.021736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:27:56.060  [2024-12-09 16:38:25.021750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.215 ms
00:27:56.060  [2024-12-09 16:38:25.021770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.060  [2024-12-09 16:38:25.028934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.029093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:27:56.060  [2024-12-09 16:38:25.029193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.153 ms
00:27:56.060  [2024-12-09 16:38:25.029232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.060  [2024-12-09 16:38:25.034027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.034148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:27:56.060  [2024-12-09 16:38:25.034298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.728 ms
00:27:56.060  [2024-12-09 16:38:25.034341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.060  [2024-12-09 16:38:25.069478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.069667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:27:56.060  [2024-12-09 16:38:25.069806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.127 ms
00:27:56.060  [2024-12-09 16:38:25.069860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.060  [2024-12-09 16:38:25.090128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.060  [2024-12-09 16:38:25.090262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:27:56.060  [2024-12-09 16:38:25.090339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.202 ms
00:27:56.060  [2024-12-09 16:38:25.090374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.320  [2024-12-09 16:38:25.247603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.320  [2024-12-09 16:38:25.247748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:27:56.320  [2024-12-09 16:38:25.247830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 157.407 ms
00:27:56.320  [2024-12-09 16:38:25.247867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.320  [2024-12-09 16:38:25.282704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.320  [2024-12-09 16:38:25.282831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:27:56.320  [2024-12-09 16:38:25.282979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.839 ms
00:27:56.320  [2024-12-09 16:38:25.283001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.320  [2024-12-09 16:38:25.317392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.320  [2024-12-09 16:38:25.317429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:27:56.320  [2024-12-09 16:38:25.317441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.411 ms
00:27:56.320  [2024-12-09 16:38:25.317451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.320  [2024-12-09 16:38:25.351291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.320  [2024-12-09 16:38:25.351455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:27:56.320  [2024-12-09 16:38:25.351476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.859 ms
00:27:56.320  [2024-12-09 16:38:25.351486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.320  [2024-12-09 16:38:25.384862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.320  [2024-12-09 16:38:25.384912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:27:56.320  [2024-12-09 16:38:25.384937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.354 ms
00:27:56.320  [2024-12-09 16:38:25.384953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.320  [2024-12-09 16:38:25.384995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:56.320  [2024-12-09 16:38:25.385012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   131072 / 261120 	wr_cnt: 1	state: open
00:27:56.320  [2024-12-09 16:38:25.385024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.320  [2024-12-09 16:38:25.385042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.385990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.386000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.386013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.386023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.321  [2024-12-09 16:38:25.386033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.322  [2024-12-09 16:38:25.386044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.322  [2024-12-09 16:38:25.386055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.322  [2024-12-09 16:38:25.386065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:27:56.322  [2024-12-09 16:38:25.386083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:27:56.322  [2024-12-09 16:38:25.386094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         3fd7dc7f-5371-4379-8893-54820b2eff53
00:27:56.322  [2024-12-09 16:38:25.386105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    131072
00:27:56.322  [2024-12-09 16:38:25.386116] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        52416
00:27:56.322  [2024-12-09 16:38:25.386125] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         51456
00:27:56.322  [2024-12-09 16:38:25.386136] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0187
00:27:56.322  [2024-12-09 16:38:25.386152] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:56.322  [2024-12-09 16:38:25.386172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:27:56.322  [2024-12-09 16:38:25.386182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:27:56.322  [2024-12-09 16:38:25.386192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:27:56.322  [2024-12-09 16:38:25.386201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:27:56.322  [2024-12-09 16:38:25.386211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.322  [2024-12-09 16:38:25.386221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:27:56.322  [2024-12-09 16:38:25.386232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.220 ms
00:27:56.322  [2024-12-09 16:38:25.386242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
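WAF in the stats dump above is write amplification: total device writes divided by user writes. A quick check against the numbers reported:

    awk 'BEGIN { printf "%.4f\n", 52416 / 51456 }'   # -> 1.0187, matching the reported WAF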
00:27:56.322  [2024-12-09 16:38:25.404730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.322  [2024-12-09 16:38:25.404856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:27:56.322  [2024-12-09 16:38:25.404881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.482 ms
00:27:56.322  [2024-12-09 16:38:25.404907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.322  [2024-12-09 16:38:25.405487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:56.322  [2024-12-09 16:38:25.405501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:27:56.322  [2024-12-09 16:38:25.405512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.534 ms
00:27:56.322  [2024-12-09 16:38:25.405522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.322  [2024-12-09 16:38:25.453241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.322  [2024-12-09 16:38:25.453278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:56.322  [2024-12-09 16:38:25.453291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.322  [2024-12-09 16:38:25.453301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.322  [2024-12-09 16:38:25.453349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.322  [2024-12-09 16:38:25.453359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:56.322  [2024-12-09 16:38:25.453369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.322  [2024-12-09 16:38:25.453378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.322  [2024-12-09 16:38:25.453447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.322  [2024-12-09 16:38:25.453460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:56.322  [2024-12-09 16:38:25.453475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.322  [2024-12-09 16:38:25.453483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.322  [2024-12-09 16:38:25.453499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.322  [2024-12-09 16:38:25.453509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:56.322  [2024-12-09 16:38:25.453519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.322  [2024-12-09 16:38:25.453528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.569342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.569394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:56.581  [2024-12-09 16:38:25.569409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.569419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.663751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.663795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:56.581  [2024-12-09 16:38:25.663809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.663820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.663919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.663940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:56.581  [2024-12-09 16:38:25.663957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.663978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.664037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.664051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:56.581  [2024-12-09 16:38:25.664062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.664071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.664184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.664198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:56.581  [2024-12-09 16:38:25.664208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.664217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.664255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.664267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:27:56.581  [2024-12-09 16:38:25.664277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.664286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.664337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.664348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:56.581  [2024-12-09 16:38:25.664358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.664368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.664412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:56.581  [2024-12-09 16:38:25.664424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:56.581  [2024-12-09 16:38:25.664434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:56.581  [2024-12-09 16:38:25.664444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:56.581  [2024-12-09 16:38:25.664560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 650.301 ms, result 0
00:27:57.519  
00:27:57.519  
00:27:57.519   16:38:26 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:59.425  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80125
00:27:59.425   16:38:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80125 ']'
00:27:59.425   16:38:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80125
00:27:59.425  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80125) - No such process
00:27:59.425  Process with pid 80125 is not found
00:27:59.425   16:38:28 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 80125 is not found'
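killprocess probes the pid with 'kill -0', which delivers no signal and only tests for existence; pid 80125 had already exited, so the probe fails and the script just logs that and moves on. A minimal standalone sketch of the idiom:

    pid=80125                        # the pid from the run above
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                  # still running: terminate it
    else
        echo "Process with pid $pid is not found"
    fi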
00:27:59.425  Remove shared memory files
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:27:59.425   16:38:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:59.426   16:38:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:27:59.426  ************************************
00:27:59.426  END TEST ftl_restore
00:27:59.426  ************************************
00:27:59.426  
00:27:59.426  real	3m28.407s
00:27:59.426  user	3m16.517s
00:27:59.426  sys	0m13.245s
00:27:59.426   16:38:28 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:59.426   16:38:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:27:59.685   16:38:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:59.685   16:38:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:27:59.685   16:38:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:59.685   16:38:28 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:59.685  ************************************
00:27:59.685  START TEST ftl_dirty_shutdown
00:27:59.685  ************************************
00:27:59.685   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:59.685  * Looking for test storage...
00:27:59.685  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:27:59.685    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:59.685     16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:27:59.685     16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0
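The lt/cmp_versions trace above splits both version strings on '.', '-', and ':' and compares the components numerically, left to right, until one differs. A compact sketch of the same idea (standalone, not the script's exact code; assumes numeric components):

    lt() {   # succeeds iff version $1 sorts before version $2
        local IFS=.-: a b i n
        read -ra a <<< "$1"; read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo '1.15 < 2'     # the comparison made above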
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:59.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.945  		--rc genhtml_branch_coverage=1
00:27:59.945  		--rc genhtml_function_coverage=1
00:27:59.945  		--rc genhtml_legend=1
00:27:59.945  		--rc geninfo_all_blocks=1
00:27:59.945  		--rc geninfo_unexecuted_blocks=1
00:27:59.945  		
00:27:59.945  		'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:59.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.945  		--rc genhtml_branch_coverage=1
00:27:59.945  		--rc genhtml_function_coverage=1
00:27:59.945  		--rc genhtml_legend=1
00:27:59.945  		--rc geninfo_all_blocks=1
00:27:59.945  		--rc geninfo_unexecuted_blocks=1
00:27:59.945  		
00:27:59.945  		'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:59.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.945  		--rc genhtml_branch_coverage=1
00:27:59.945  		--rc genhtml_function_coverage=1
00:27:59.945  		--rc genhtml_legend=1
00:27:59.945  		--rc geninfo_all_blocks=1
00:27:59.945  		--rc geninfo_unexecuted_blocks=1
00:27:59.945  		
00:27:59.945  		'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:59.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:59.945  		--rc genhtml_branch_coverage=1
00:27:59.945  		--rc genhtml_function_coverage=1
00:27:59.945  		--rc genhtml_legend=1
00:27:59.945  		--rc geninfo_all_blocks=1
00:27:59.945  		--rc geninfo_unexecuted_blocks=1
00:27:59.945  		
00:27:59.945  		'
00:27:59.945   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:27:59.945      16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:27:59.945     16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:59.945    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:59.946    16:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
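Quick sanity check on the geometry above: 262144 blocks of 4096 bytes is exactly 1 GiB, which is why both spdk_dd transcripts below top out at 1024/1024 MB:

    echo $(( 262144 * 4096 ))            # 1073741824 bytes
    echo $(( 262144 * 4096 / 1024**3 ))  # 1 (GiB)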
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82345
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82345
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82345 ']'
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:59.946  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:59.946   16:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:59.946  [2024-12-09 16:38:29.044465] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:27:59.946  [2024-12-09 16:38:29.044723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82345 ]
00:28:00.205  [2024-12-09 16:38:29.224554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.205  [2024-12-09 16:38:29.330151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:01.143   16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:01.143   16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0
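waitforlisten is just a retry loop against the new target's RPC socket. A stripped-down equivalent of what ran above, assuming rpc_get_methods as the liveness probe (the real helper allows max_retries=100):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null && break
        sleep 0.1
    done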
00:28:01.143    16:38:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:28:01.143    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0
00:28:01.144    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:28:01.144    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424
00:28:01.144    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:28:01.144     16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:28:01.403    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1
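create_base_bdev reduces to a single RPC: attach the PCIe controller at 0000:00:11.0 under the name nvme0, and its namespace comes back as the bdev nvme0n1 (controller name plus n<nsid>):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # prints the created bdev name(s); here: nvme0n1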
00:28:01.403    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size
00:28:01.403     16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:28:01.403     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:28:01.403     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:28:01.403     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:28:01.403     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:28:01.403      16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:28:01.662     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:28:01.662    {
00:28:01.662      "name": "nvme0n1",
00:28:01.662      "aliases": [
00:28:01.663        "d93682dd-a490-4d8b-9f1e-45ef5665c9a3"
00:28:01.663      ],
00:28:01.663      "product_name": "NVMe disk",
00:28:01.663      "block_size": 4096,
00:28:01.663      "num_blocks": 1310720,
00:28:01.663      "uuid": "d93682dd-a490-4d8b-9f1e-45ef5665c9a3",
00:28:01.663      "numa_id": -1,
00:28:01.663      "assigned_rate_limits": {
00:28:01.663        "rw_ios_per_sec": 0,
00:28:01.663        "rw_mbytes_per_sec": 0,
00:28:01.663        "r_mbytes_per_sec": 0,
00:28:01.663        "w_mbytes_per_sec": 0
00:28:01.663      },
00:28:01.663      "claimed": true,
00:28:01.663      "claim_type": "read_many_write_one",
00:28:01.663      "zoned": false,
00:28:01.663      "supported_io_types": {
00:28:01.663        "read": true,
00:28:01.663        "write": true,
00:28:01.663        "unmap": true,
00:28:01.663        "flush": true,
00:28:01.663        "reset": true,
00:28:01.663        "nvme_admin": true,
00:28:01.663        "nvme_io": true,
00:28:01.663        "nvme_io_md": false,
00:28:01.663        "write_zeroes": true,
00:28:01.663        "zcopy": false,
00:28:01.663        "get_zone_info": false,
00:28:01.663        "zone_management": false,
00:28:01.663        "zone_append": false,
00:28:01.663        "compare": true,
00:28:01.663        "compare_and_write": false,
00:28:01.663        "abort": true,
00:28:01.663        "seek_hole": false,
00:28:01.663        "seek_data": false,
00:28:01.663        "copy": true,
00:28:01.663        "nvme_iov_md": false
00:28:01.663      },
00:28:01.663      "driver_specific": {
00:28:01.663        "nvme": [
00:28:01.663          {
00:28:01.663            "pci_address": "0000:00:11.0",
00:28:01.663            "trid": {
00:28:01.663              "trtype": "PCIe",
00:28:01.663              "traddr": "0000:00:11.0"
00:28:01.663            },
00:28:01.663            "ctrlr_data": {
00:28:01.663              "cntlid": 0,
00:28:01.663              "vendor_id": "0x1b36",
00:28:01.663              "model_number": "QEMU NVMe Ctrl",
00:28:01.663              "serial_number": "12341",
00:28:01.663              "firmware_revision": "8.0.0",
00:28:01.663              "subnqn": "nqn.2019-08.org.qemu:12341",
00:28:01.663              "oacs": {
00:28:01.663                "security": 0,
00:28:01.663                "format": 1,
00:28:01.663                "firmware": 0,
00:28:01.663                "ns_manage": 1
00:28:01.663              },
00:28:01.663              "multi_ctrlr": false,
00:28:01.663              "ana_reporting": false
00:28:01.663            },
00:28:01.663            "vs": {
00:28:01.663              "nvme_version": "1.4"
00:28:01.663            },
00:28:01.663            "ns_data": {
00:28:01.663              "id": 1,
00:28:01.663              "can_share": false
00:28:01.663            }
00:28:01.663          }
00:28:01.663        ],
00:28:01.663        "mp_policy": "active_passive"
00:28:01.663      }
00:28:01.663    }
00:28:01.663  ]'
00:28:01.663      16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:28:01.663     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:28:01.663      16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:28:01.663     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:28:01.663     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:28:01.663     16:38:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:28:01.663    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:28:01.663    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
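get_bdev_size is jq plus shell arithmetic over the dump above: 1310720 blocks x 4096 B = 5120 MiB, i.e. a 5 GiB QEMU namespace. Condensed, with the same rpc.py and bdev name:

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))        # 5120 (MiB)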
00:28:01.663    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:28:01.663     16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:01.663     16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:01.922    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9678bee3-58ec-4718-8c5c-1d25d8f4eda6
00:28:01.922    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:28:01.922    16:38:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9678bee3-58ec-4718-8c5c-1d25d8f4eda6
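clear_lvols removes any lvstore a previous test left behind (here 9678bee3-58ec-4718-8c5c-1d25d8f4eda6). The loop is effectively:

    stores=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done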
00:28:02.182     16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=872e902a-e743-4837-930d-d0acc78f5762
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 872e902a-e743-4837-930d-d0acc78f5762
00:28:02.441   16:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b1398f9c-0cae-427c-ba8a-dd0666b8d837
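Note the -t on bdev_lvol_create: the 103424 MiB volume is thin-provisioned, which is how it fits on the 5120 MiB namespace measured above (num_allocated_clusters starts at 0 in the dumps below). The pair of calls:

    lvs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
    # 103424 MiB thin lvol on that store; prints the lvol bdev's UUID
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"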
00:28:02.441   16:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']'
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:02.441    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size=
00:28:02.441     16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:02.441     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:02.441     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:28:02.441     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:28:02.441     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:28:02.441      16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:02.701     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:28:02.701    {
00:28:02.701      "name": "b1398f9c-0cae-427c-ba8a-dd0666b8d837",
00:28:02.701      "aliases": [
00:28:02.701        "lvs/nvme0n1p0"
00:28:02.701      ],
00:28:02.701      "product_name": "Logical Volume",
00:28:02.701      "block_size": 4096,
00:28:02.701      "num_blocks": 26476544,
00:28:02.701      "uuid": "b1398f9c-0cae-427c-ba8a-dd0666b8d837",
00:28:02.701      "assigned_rate_limits": {
00:28:02.701        "rw_ios_per_sec": 0,
00:28:02.701        "rw_mbytes_per_sec": 0,
00:28:02.701        "r_mbytes_per_sec": 0,
00:28:02.701        "w_mbytes_per_sec": 0
00:28:02.701      },
00:28:02.701      "claimed": false,
00:28:02.701      "zoned": false,
00:28:02.701      "supported_io_types": {
00:28:02.701        "read": true,
00:28:02.701        "write": true,
00:28:02.701        "unmap": true,
00:28:02.701        "flush": false,
00:28:02.701        "reset": true,
00:28:02.701        "nvme_admin": false,
00:28:02.701        "nvme_io": false,
00:28:02.701        "nvme_io_md": false,
00:28:02.701        "write_zeroes": true,
00:28:02.701        "zcopy": false,
00:28:02.701        "get_zone_info": false,
00:28:02.701        "zone_management": false,
00:28:02.701        "zone_append": false,
00:28:02.701        "compare": false,
00:28:02.701        "compare_and_write": false,
00:28:02.701        "abort": false,
00:28:02.701        "seek_hole": true,
00:28:02.701        "seek_data": true,
00:28:02.701        "copy": false,
00:28:02.701        "nvme_iov_md": false
00:28:02.701      },
00:28:02.701      "driver_specific": {
00:28:02.701        "lvol": {
00:28:02.701          "lvol_store_uuid": "872e902a-e743-4837-930d-d0acc78f5762",
00:28:02.701          "base_bdev": "nvme0n1",
00:28:02.701          "thin_provision": true,
00:28:02.701          "num_allocated_clusters": 0,
00:28:02.701          "snapshot": false,
00:28:02.701          "clone": false,
00:28:02.701          "esnap_clone": false
00:28:02.701        }
00:28:02.701      }
00:28:02.701    }
00:28:02.701  ]'
00:28:02.701      16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:28:02.701     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:28:02.701      16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:28:02.961     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:28:02.961     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:28:02.961     16:38:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:28:02.961    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171
00:28:02.961    16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:28:02.961     16:38:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:28:03.220    16:38:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:28:03.220    16:38:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:28:03.220     16:38:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:03.220     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:03.220     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:28:03.220     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:28:03.220     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:28:03.220      16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:03.220     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:28:03.220    {
00:28:03.220      "name": "b1398f9c-0cae-427c-ba8a-dd0666b8d837",
00:28:03.220      "aliases": [
00:28:03.220        "lvs/nvme0n1p0"
00:28:03.220      ],
00:28:03.220      "product_name": "Logical Volume",
00:28:03.220      "block_size": 4096,
00:28:03.220      "num_blocks": 26476544,
00:28:03.220      "uuid": "b1398f9c-0cae-427c-ba8a-dd0666b8d837",
00:28:03.220      "assigned_rate_limits": {
00:28:03.220        "rw_ios_per_sec": 0,
00:28:03.220        "rw_mbytes_per_sec": 0,
00:28:03.220        "r_mbytes_per_sec": 0,
00:28:03.220        "w_mbytes_per_sec": 0
00:28:03.220      },
00:28:03.220      "claimed": false,
00:28:03.220      "zoned": false,
00:28:03.220      "supported_io_types": {
00:28:03.220        "read": true,
00:28:03.220        "write": true,
00:28:03.220        "unmap": true,
00:28:03.220        "flush": false,
00:28:03.220        "reset": true,
00:28:03.220        "nvme_admin": false,
00:28:03.220        "nvme_io": false,
00:28:03.220        "nvme_io_md": false,
00:28:03.220        "write_zeroes": true,
00:28:03.220        "zcopy": false,
00:28:03.220        "get_zone_info": false,
00:28:03.220        "zone_management": false,
00:28:03.220        "zone_append": false,
00:28:03.220        "compare": false,
00:28:03.220        "compare_and_write": false,
00:28:03.220        "abort": false,
00:28:03.220        "seek_hole": true,
00:28:03.220        "seek_data": true,
00:28:03.220        "copy": false,
00:28:03.220        "nvme_iov_md": false
00:28:03.220      },
00:28:03.220      "driver_specific": {
00:28:03.220        "lvol": {
00:28:03.220          "lvol_store_uuid": "872e902a-e743-4837-930d-d0acc78f5762",
00:28:03.220          "base_bdev": "nvme0n1",
00:28:03.220          "thin_provision": true,
00:28:03.220          "num_allocated_clusters": 0,
00:28:03.220          "snapshot": false,
00:28:03.220          "clone": false,
00:28:03.220          "esnap_clone": false
00:28:03.220        }
00:28:03.220      }
00:28:03.220    }
00:28:03.220  ]'
00:28:03.479      16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:28:03.479     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:28:03.479      16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:28:03.479     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:28:03.479     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:28:03.479     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:28:03.479    16:38:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
00:28:03.479    16:38:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:28:03.739   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
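create_nv_cache_bdev attaches the second controller (0000:00:10.0) as nvc0 and carves a single 5171 MiB split out of its namespace; the resulting split bdev nvc0n1p0 becomes the FTL write-buffer cache:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # one split of 5171 MiB -> creates nvc0n1p0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1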
00:28:03.739    16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:03.739    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:03.739    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:28:03.739    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:28:03.739    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:28:03.739     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1398f9c-0cae-427c-ba8a-dd0666b8d837
00:28:03.739    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:28:03.739    {
00:28:03.739      "name": "b1398f9c-0cae-427c-ba8a-dd0666b8d837",
00:28:03.739      "aliases": [
00:28:03.739        "lvs/nvme0n1p0"
00:28:03.739      ],
00:28:03.739      "product_name": "Logical Volume",
00:28:03.739      "block_size": 4096,
00:28:03.739      "num_blocks": 26476544,
00:28:03.739      "uuid": "b1398f9c-0cae-427c-ba8a-dd0666b8d837",
00:28:03.739      "assigned_rate_limits": {
00:28:03.739        "rw_ios_per_sec": 0,
00:28:03.739        "rw_mbytes_per_sec": 0,
00:28:03.739        "r_mbytes_per_sec": 0,
00:28:03.739        "w_mbytes_per_sec": 0
00:28:03.739      },
00:28:03.739      "claimed": false,
00:28:03.739      "zoned": false,
00:28:03.739      "supported_io_types": {
00:28:03.739        "read": true,
00:28:03.739        "write": true,
00:28:03.739        "unmap": true,
00:28:03.739        "flush": false,
00:28:03.739        "reset": true,
00:28:03.739        "nvme_admin": false,
00:28:03.739        "nvme_io": false,
00:28:03.739        "nvme_io_md": false,
00:28:03.739        "write_zeroes": true,
00:28:03.739        "zcopy": false,
00:28:03.739        "get_zone_info": false,
00:28:03.739        "zone_management": false,
00:28:03.739        "zone_append": false,
00:28:03.739        "compare": false,
00:28:03.739        "compare_and_write": false,
00:28:03.739        "abort": false,
00:28:03.739        "seek_hole": true,
00:28:03.739        "seek_data": true,
00:28:03.739        "copy": false,
00:28:03.739        "nvme_iov_md": false
00:28:03.739      },
00:28:03.739      "driver_specific": {
00:28:03.739        "lvol": {
00:28:03.739          "lvol_store_uuid": "872e902a-e743-4837-930d-d0acc78f5762",
00:28:03.739          "base_bdev": "nvme0n1",
00:28:03.739          "thin_provision": true,
00:28:03.739          "num_allocated_clusters": 0,
00:28:03.739          "snapshot": false,
00:28:03.739          "clone": false,
00:28:03.739          "esnap_clone": false
00:28:03.739        }
00:28:03.739      }
00:28:03.739    }
00:28:03.739  ]'
00:28:03.739     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:28:04.000    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:28:04.000     16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:28:04.000    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:28:04.000    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:28:04.000    16:38:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:28:04.000   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
00:28:04.000   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b1398f9c-0cae-427c-ba8a-dd0666b8d837 --l2p_dram_limit 10'
00:28:04.000   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
00:28:04.000   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
00:28:04.000   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
00:28:04.000   16:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b1398f9c-0cae-427c-ba8a-dd0666b8d837 --l2p_dram_limit 10 -c nvc0n1p0
00:28:04.000  [2024-12-09 16:38:33.135625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.135673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:28:04.000  [2024-12-09 16:38:33.135691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:28:04.000  [2024-12-09 16:38:33.135718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.135777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.135789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:28:04.000  [2024-12-09 16:38:33.135802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:28:04.000  [2024-12-09 16:38:33.135812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.135841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:04.000  [2024-12-09 16:38:33.136793] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:04.000  [2024-12-09 16:38:33.136828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.136839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:28:04.000  [2024-12-09 16:38:33.136852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.995 ms
00:28:04.000  [2024-12-09 16:38:33.136862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.137001] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b49a9023-73d5-44b8-8ac1-3392f825704d
00:28:04.000  [2024-12-09 16:38:33.138447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.138480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:28:04.000  [2024-12-09 16:38:33.138493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.017 ms
00:28:04.000  [2024-12-09 16:38:33.138506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.146260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.146301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:28:04.000  [2024-12-09 16:38:33.146312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.724 ms
00:28:04.000  [2024-12-09 16:38:33.146340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.146436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.146452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:28:04.000  [2024-12-09 16:38:33.146463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.075 ms
00:28:04.000  [2024-12-09 16:38:33.146480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.146543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.146558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:28:04.000  [2024-12-09 16:38:33.146571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:28:04.000  [2024-12-09 16:38:33.146583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.146607] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:04.000  [2024-12-09 16:38:33.151279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.151312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:28:04.000  [2024-12-09 16:38:33.151327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.684 ms
00:28:04.000  [2024-12-09 16:38:33.151353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.151392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.151402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:28:04.000  [2024-12-09 16:38:33.151414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:28:04.000  [2024-12-09 16:38:33.151425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.000  [2024-12-09 16:38:33.151461] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:28:04.000  [2024-12-09 16:38:33.151591] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:04.000  [2024-12-09 16:38:33.151610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:04.000  [2024-12-09 16:38:33.151623] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:04.000  [2024-12-09 16:38:33.151651] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:28:04.000  [2024-12-09 16:38:33.151663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:28:04.000  [2024-12-09 16:38:33.151677] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:28:04.000  [2024-12-09 16:38:33.151686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:28:04.000  [2024-12-09 16:38:33.151703] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:28:04.000  [2024-12-09 16:38:33.151713] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:28:04.000  [2024-12-09 16:38:33.151725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.000  [2024-12-09 16:38:33.151746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:28:04.000  [2024-12-09 16:38:33.151760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.267 ms
00:28:04.000  [2024-12-09 16:38:33.151770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.001  [2024-12-09 16:38:33.151845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.001  [2024-12-09 16:38:33.151856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:28:04.001  [2024-12-09 16:38:33.151868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:28:04.001  [2024-12-09 16:38:33.151878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.001  [2024-12-09 16:38:33.151993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:04.001  [2024-12-09 16:38:33.152008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:04.001  [2024-12-09 16:38:33.152021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:04.001  [2024-12-09 16:38:33.152054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:04.001  [2024-12-09 16:38:33.152087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:28:04.001  [2024-12-09 16:38:33.152110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:04.001  [2024-12-09 16:38:33.152119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:28:04.001  [2024-12-09 16:38:33.152130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:28:04.001  [2024-12-09 16:38:33.152140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:04.001  [2024-12-09 16:38:33.152152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:28:04.001  [2024-12-09 16:38:33.152161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:04.001  [2024-12-09 16:38:33.152185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:04.001  [2024-12-09 16:38:33.152217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:04.001  [2024-12-09 16:38:33.152246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:04.001  [2024-12-09 16:38:33.152278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:04.001  [2024-12-09 16:38:33.152323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:04.001  [2024-12-09 16:38:33.152358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:28:04.001  [2024-12-09 16:38:33.152378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:04.001  [2024-12-09 16:38:33.152387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:28:04.001  [2024-12-09 16:38:33.152400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:28:04.001  [2024-12-09 16:38:33.152409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:04.001  [2024-12-09 16:38:33.152420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:28:04.001  [2024-12-09 16:38:33.152429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:04.001  [2024-12-09 16:38:33.152450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:28:04.001  [2024-12-09 16:38:33.152462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152470] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:04.001  [2024-12-09 16:38:33.152483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:04.001  [2024-12-09 16:38:33.152493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:04.001  [2024-12-09 16:38:33.152516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:04.001  [2024-12-09 16:38:33.152530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:28:04.001  [2024-12-09 16:38:33.152540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:28:04.001  [2024-12-09 16:38:33.152552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:04.001  [2024-12-09 16:38:33.152562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:28:04.001  [2024-12-09 16:38:33.152574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:28:04.001  [2024-12-09 16:38:33.152585] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:04.001  [2024-12-09 16:38:33.152603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:28:04.001  [2024-12-09 16:38:33.152629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:28:04.001  [2024-12-09 16:38:33.152640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:28:04.001  [2024-12-09 16:38:33.152653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:28:04.001  [2024-12-09 16:38:33.152663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:28:04.001  [2024-12-09 16:38:33.152676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:28:04.001  [2024-12-09 16:38:33.152687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:28:04.001  [2024-12-09 16:38:33.152701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:28:04.001  [2024-12-09 16:38:33.152711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:28:04.001  [2024-12-09 16:38:33.152727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:04.001  [2024-12-09 16:38:33.152782] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:04.001  [2024-12-09 16:38:33.152796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:04.001  [2024-12-09 16:38:33.152827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:04.001  [2024-12-09 16:38:33.152837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:04.001  [2024-12-09 16:38:33.152850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:04.001  [2024-12-09 16:38:33.152860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:04.001  [2024-12-09 16:38:33.152874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:28:04.001  [2024-12-09 16:38:33.152884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.926 ms
00:28:04.001  [2024-12-09 16:38:33.152897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:04.001  [2024-12-09 16:38:33.152947] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:28:04.001  [2024-12-09 16:38:33.152965] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:28:08.280  [2024-12-09 16:38:36.880331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.880397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:28:08.280  [2024-12-09 16:38:36.880414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3733.434 ms
00:28:08.280  [2024-12-09 16:38:36.880427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.917908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.917958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:28:08.280  [2024-12-09 16:38:36.917973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.240 ms
00:28:08.280  [2024-12-09 16:38:36.918002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.918134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.918149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:28:08.280  [2024-12-09 16:38:36.918159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.064 ms
00:28:08.280  [2024-12-09 16:38:36.918177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.963623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.963847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:28:08.280  [2024-12-09 16:38:36.964004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 45.480 ms
00:28:08.280  [2024-12-09 16:38:36.964047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.964106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.964147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:28:08.280  [2024-12-09 16:38:36.964178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:28:08.280  [2024-12-09 16:38:36.964285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.964815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.964862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:28:08.280  [2024-12-09 16:38:36.964968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.427 ms
00:28:08.280  [2024-12-09 16:38:36.965010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.965144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.965243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:28:08.280  [2024-12-09 16:38:36.965282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.078 ms
00:28:08.280  [2024-12-09 16:38:36.965317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:36.985774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:36.986200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:28:08.280  [2024-12-09 16:38:36.986224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.399 ms
00:28:08.280  [2024-12-09 16:38:36.986254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:37.012306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:08.280  [2024-12-09 16:38:37.015598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:37.015628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:28:08.280  [2024-12-09 16:38:37.015643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.307 ms
00:28:08.280  [2024-12-09 16:38:37.015653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.280  [2024-12-09 16:38:37.107156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.280  [2024-12-09 16:38:37.107209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:28:08.281  [2024-12-09 16:38:37.107228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 91.619 ms
00:28:08.281  [2024-12-09 16:38:37.107238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.107407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.107423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:28:08.281  [2024-12-09 16:38:37.107439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.125 ms
00:28:08.281  [2024-12-09 16:38:37.107449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.142455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.142595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:28:08.281  [2024-12-09 16:38:37.142637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.012 ms
00:28:08.281  [2024-12-09 16:38:37.142648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.176825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.176859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:28:08.281  [2024-12-09 16:38:37.176876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.184 ms
00:28:08.281  [2024-12-09 16:38:37.176886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.177659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.177791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:28:08.281  [2024-12-09 16:38:37.177816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.716 ms
00:28:08.281  [2024-12-09 16:38:37.177829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.275321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.275481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:28:08.281  [2024-12-09 16:38:37.275511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 97.587 ms
00:28:08.281  [2024-12-09 16:38:37.275538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.313525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.313666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:28:08.281  [2024-12-09 16:38:37.313693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.865 ms
00:28:08.281  [2024-12-09 16:38:37.313703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.347565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.347602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:28:08.281  [2024-12-09 16:38:37.347618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.857 ms
00:28:08.281  [2024-12-09 16:38:37.347627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.382248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.382383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:28:08.281  [2024-12-09 16:38:37.382423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.634 ms
00:28:08.281  [2024-12-09 16:38:37.382433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.382477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.382489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:28:08.281  [2024-12-09 16:38:37.382506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:28:08.281  [2024-12-09 16:38:37.382515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.382625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:08.281  [2024-12-09 16:38:37.382641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:28:08.281  [2024-12-09 16:38:37.382655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:28:08.281  [2024-12-09 16:38:37.382665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:08.281  [2024-12-09 16:38:37.383721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4254.588 ms, result 0
00:28:08.281  {
00:28:08.281    "name": "ftl0",
00:28:08.281    "uuid": "b49a9023-73d5-44b8-8ac1-3392f825704d"
00:28:08.281  }
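The assembled bdev_ftl_create call, for reference: ftl0 sits on the thin lvol (-d), uses nvc0n1p0 as NV cache (-c), and caps the L2P table at 10 MiB of DRAM (hence the 'l2p maximum resident size is: 9 (of 10) MiB' notice above); the 240 s RPC timeout covers the NV-cache scrub. The UUID in the result identifies this on-disk instance for any later load:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d b1398f9c-0cae-427c-ba8a-dd0666b8d837 \
        --l2p_dram_limit 10 -c nvc0n1p0
    # => { "name": "ftl0", "uuid": "b49a9023-73d5-44b8-8ac1-3392f825704d" }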
00:28:08.281   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:28:08.281   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:28:08.540   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:28:08.540   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:28:08.540   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:28:08.799  /dev/nbd0
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:28:08.799  1+0 records in
00:28:08.799  1+0 records out
00:28:08.799  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411264 s, 10.0 MB/s
00:28:08.799    16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
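Exposing ftl0 to the kernel and waiting for it comes down to: load the nbd module, start the disk over RPC, poll /proc/partitions, then prove a direct 4 KiB read works (the dd above). Roughly, with the read redirected to /dev/null for brevity:

    modprobe nbd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct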
00:28:08.799   16:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:28:08.799  [2024-12-09 16:38:37.973340] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:28:08.799  [2024-12-09 16:38:37.973984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82497 ]
00:28:09.059  [2024-12-09 16:38:38.153572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:09.318  [2024-12-09 16:38:38.261605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:10.696  
[2024-12-09T16:38:40.811Z] Copying: 214/1024 [MB] (214 MBps)
[2024-12-09T16:38:41.748Z] Copying: 429/1024 [MB] (214 MBps)
[2024-12-09T16:38:42.685Z] Copying: 645/1024 [MB] (215 MBps)
[2024-12-09T16:38:43.622Z] Copying: 854/1024 [MB] (209 MBps)
[2024-12-09T16:38:44.559Z] Copying: 1024/1024 [MB] (average 211 MBps)
00:28:15.380  
00:28:15.380   16:38:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
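The test stages 1 GiB of random data (the 262144 x 4 KiB geometry from the top of the run) and records its md5, presumably the baseline that gets re-verified after the dirty shutdown. The staging pair:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile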
00:28:17.284   16:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:28:17.284  [2024-12-09 16:38:46.311489] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:28:17.284  [2024-12-09 16:38:46.311778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82584 ]
00:28:17.543  [2024-12-09 16:38:46.490790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:17.543  [2024-12-09 16:38:46.595678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:18.921  
[2024-12-09T16:38:49.036Z] Copying: 16/1024 [MB] (16 MBps)
[2024-12-09T16:38:49.974Z] Copying: 30/1024 [MB] (14 MBps)
[2024-12-09T16:38:50.911Z] Copying: 46/1024 [MB] (16 MBps)
[2024-12-09T16:38:52.288Z] Copying: 62/1024 [MB] (15 MBps)
[2024-12-09T16:38:53.227Z] Copying: 78/1024 [MB] (15 MBps)
[2024-12-09T16:38:54.164Z] Copying: 94/1024 [MB] (16 MBps)
[2024-12-09T16:38:55.101Z] Copying: 111/1024 [MB] (16 MBps)
[2024-12-09T16:38:56.038Z] Copying: 128/1024 [MB] (16 MBps)
[2024-12-09T16:38:56.976Z] Copying: 145/1024 [MB] (16 MBps)
[2024-12-09T16:38:57.913Z] Copying: 161/1024 [MB] (16 MBps)
[2024-12-09T16:38:59.291Z] Copying: 178/1024 [MB] (16 MBps)
[2024-12-09T16:39:00.228Z] Copying: 194/1024 [MB] (16 MBps)
[2024-12-09T16:39:01.165Z] Copying: 211/1024 [MB] (16 MBps)
[2024-12-09T16:39:02.100Z] Copying: 228/1024 [MB] (16 MBps)
[2024-12-09T16:39:03.036Z] Copying: 245/1024 [MB] (17 MBps)
[2024-12-09T16:39:04.008Z] Copying: 262/1024 [MB] (16 MBps)
[2024-12-09T16:39:04.944Z] Copying: 279/1024 [MB] (16 MBps)
[2024-12-09T16:39:06.321Z] Copying: 296/1024 [MB] (16 MBps)
[2024-12-09T16:39:06.889Z] Copying: 313/1024 [MB] (16 MBps)
[2024-12-09T16:39:08.267Z] Copying: 330/1024 [MB] (16 MBps)
[2024-12-09T16:39:09.204Z] Copying: 347/1024 [MB] (17 MBps)
[2024-12-09T16:39:10.142Z] Copying: 364/1024 [MB] (17 MBps)
[2024-12-09T16:39:11.079Z] Copying: 381/1024 [MB] (17 MBps)
[2024-12-09T16:39:12.016Z] Copying: 398/1024 [MB] (16 MBps)
[2024-12-09T16:39:12.953Z] Copying: 415/1024 [MB] (16 MBps)
[2024-12-09T16:39:13.890Z] Copying: 432/1024 [MB] (16 MBps)
[2024-12-09T16:39:15.269Z] Copying: 448/1024 [MB] (16 MBps)
[2024-12-09T16:39:16.211Z] Copying: 465/1024 [MB] (16 MBps)
[2024-12-09T16:39:17.149Z] Copying: 481/1024 [MB] (16 MBps)
[2024-12-09T16:39:18.086Z] Copying: 498/1024 [MB] (16 MBps)
[2024-12-09T16:39:19.024Z] Copying: 514/1024 [MB] (16 MBps)
[2024-12-09T16:39:19.961Z] Copying: 530/1024 [MB] (16 MBps)
[2024-12-09T16:39:20.898Z] Copying: 547/1024 [MB] (16 MBps)
[2024-12-09T16:39:22.276Z] Copying: 563/1024 [MB] (16 MBps)
[2024-12-09T16:39:23.220Z] Copying: 579/1024 [MB] (16 MBps)
[2024-12-09T16:39:24.157Z] Copying: 596/1024 [MB] (16 MBps)
[2024-12-09T16:39:25.094Z] Copying: 613/1024 [MB] (16 MBps)
[2024-12-09T16:39:26.030Z] Copying: 629/1024 [MB] (16 MBps)
[2024-12-09T16:39:26.968Z] Copying: 646/1024 [MB] (16 MBps)
[2024-12-09T16:39:27.904Z] Copying: 662/1024 [MB] (16 MBps)
[2024-12-09T16:39:29.283Z] Copying: 679/1024 [MB] (16 MBps)
[2024-12-09T16:39:29.851Z] Copying: 695/1024 [MB] (16 MBps)
[2024-12-09T16:39:31.229Z] Copying: 712/1024 [MB] (16 MBps)
[2024-12-09T16:39:32.166Z] Copying: 728/1024 [MB] (16 MBps)
[2024-12-09T16:39:33.182Z] Copying: 745/1024 [MB] (16 MBps)
[2024-12-09T16:39:34.119Z] Copying: 762/1024 [MB] (16 MBps)
[2024-12-09T16:39:35.056Z] Copying: 778/1024 [MB] (16 MBps)
[2024-12-09T16:39:35.993Z] Copying: 794/1024 [MB] (16 MBps)
[2024-12-09T16:39:36.930Z] Copying: 811/1024 [MB] (16 MBps)
[2024-12-09T16:39:37.867Z] Copying: 827/1024 [MB] (16 MBps)
[2024-12-09T16:39:39.245Z] Copying: 844/1024 [MB] (16 MBps)
[2024-12-09T16:39:40.182Z] Copying: 860/1024 [MB] (16 MBps)
[2024-12-09T16:39:41.118Z] Copying: 876/1024 [MB] (16 MBps)
[2024-12-09T16:39:42.056Z] Copying: 893/1024 [MB] (16 MBps)
[2024-12-09T16:39:42.993Z] Copying: 909/1024 [MB] (16 MBps)
[2024-12-09T16:39:43.929Z] Copying: 925/1024 [MB] (16 MBps)
[2024-12-09T16:39:44.865Z] Copying: 942/1024 [MB] (16 MBps)
[2024-12-09T16:39:46.243Z] Copying: 958/1024 [MB] (16 MBps)
[2024-12-09T16:39:47.180Z] Copying: 974/1024 [MB] (16 MBps)
[2024-12-09T16:39:48.116Z] Copying: 990/1024 [MB] (16 MBps)
[2024-12-09T16:39:49.052Z] Copying: 1006/1024 [MB] (16 MBps)
[2024-12-09T16:39:49.052Z] Copying: 1023/1024 [MB] (16 MBps)
[2024-12-09T16:39:49.990Z] Copying: 1024/1024 [MB] (average 16 MBps)
00:29:20.811  
00:29:20.811   16:39:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:29:20.811   16:39:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:29:21.071   16:39:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:29:21.330  [2024-12-09 16:39:50.307639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.307696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:29:21.330  [2024-12-09 16:39:50.307713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:29:21.330  [2024-12-09 16:39:50.307727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.307760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:21.330  [2024-12-09 16:39:50.312330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.312364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:29:21.330  [2024-12-09 16:39:50.312381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.552 ms
00:29:21.330  [2024-12-09 16:39:50.312391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.314680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.314722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:29:21.330  [2024-12-09 16:39:50.314740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.253 ms
00:29:21.330  [2024-12-09 16:39:50.314751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.333118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.333159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:29:21.330  [2024-12-09 16:39:50.333177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.366 ms
00:29:21.330  [2024-12-09 16:39:50.333187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.337866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.337907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:29:21.330  [2024-12-09 16:39:50.337923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.642 ms
00:29:21.330  [2024-12-09 16:39:50.337933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.373760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.373987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:29:21.330  [2024-12-09 16:39:50.374021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.802 ms
00:29:21.330  [2024-12-09 16:39:50.374032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.396264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.396312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:29:21.330  [2024-12-09 16:39:50.396335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 22.218 ms
00:29:21.330  [2024-12-09 16:39:50.396345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.396502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.396516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:29:21.330  [2024-12-09 16:39:50.396531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.111 ms
00:29:21.330  [2024-12-09 16:39:50.396541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.431042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.431077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:29:21.330  [2024-12-09 16:39:50.431094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.535 ms
00:29:21.330  [2024-12-09 16:39:50.431103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.464112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.464264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:29:21.330  [2024-12-09 16:39:50.464289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.017 ms
00:29:21.330  [2024-12-09 16:39:50.464299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.330  [2024-12-09 16:39:50.497150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.330  [2024-12-09 16:39:50.497205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:29:21.330  [2024-12-09 16:39:50.497222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.828 ms
00:29:21.330  [2024-12-09 16:39:50.497232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.591  [2024-12-09 16:39:50.531055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.591  [2024-12-09 16:39:50.531090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:29:21.591  [2024-12-09 16:39:50.531106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.778 ms
00:29:21.591  [2024-12-09 16:39:50.531115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.591  [2024-12-09 16:39:50.531157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:21.591  [2024-12-09 16:39:50.531174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.591  [2024-12-09 16:39:50.531867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.531998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:29:21.592  [2024-12-09 16:39:50.532457] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:29:21.592  [2024-12-09 16:39:50.532469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         b49a9023-73d5-44b8-8ac1-3392f825704d
00:29:21.592  [2024-12-09 16:39:50.532480] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:29:21.592  [2024-12-09 16:39:50.532503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:29:21.592  [2024-12-09 16:39:50.532516] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:29:21.592  [2024-12-09 16:39:50.532530] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:29:21.592  [2024-12-09 16:39:50.532539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:21.592  [2024-12-09 16:39:50.532556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:29:21.592  [2024-12-09 16:39:50.532566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:29:21.592  [2024-12-09 16:39:50.532578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:29:21.592  [2024-12-09 16:39:50.532586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:29:21.592  [2024-12-09 16:39:50.532599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.592  [2024-12-09 16:39:50.532608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:29:21.592  [2024-12-09 16:39:50.532622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.446 ms
00:29:21.592  [2024-12-09 16:39:50.532631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.552992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.592  [2024-12-09 16:39:50.553030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:29:21.592  [2024-12-09 16:39:50.553045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.339 ms
00:29:21.592  [2024-12-09 16:39:50.553056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.553659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:21.592  [2024-12-09 16:39:50.553674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:29:21.592  [2024-12-09 16:39:50.553688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.568 ms
00:29:21.592  [2024-12-09 16:39:50.553698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.621039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.592  [2024-12-09 16:39:50.621205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:29:21.592  [2024-12-09 16:39:50.621237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.592  [2024-12-09 16:39:50.621248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.621316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.592  [2024-12-09 16:39:50.621328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:29:21.592  [2024-12-09 16:39:50.621341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.592  [2024-12-09 16:39:50.621351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.621461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.592  [2024-12-09 16:39:50.621478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:29:21.592  [2024-12-09 16:39:50.621491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.592  [2024-12-09 16:39:50.621501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.621536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.592  [2024-12-09 16:39:50.621547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:29:21.592  [2024-12-09 16:39:50.621560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.592  [2024-12-09 16:39:50.621570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.592  [2024-12-09 16:39:50.749625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.592  [2024-12-09 16:39:50.749676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:29:21.592  [2024-12-09 16:39:50.749694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.592  [2024-12-09 16:39:50.749706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.850644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.850693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:29:21.852  [2024-12-09 16:39:50.850711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.850722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.850857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.850870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:29:21.852  [2024-12-09 16:39:50.850889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.850918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.850990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.851003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:29:21.852  [2024-12-09 16:39:50.851017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.851027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.851164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.851178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:29:21.852  [2024-12-09 16:39:50.851191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.851206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.851250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.851262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:29:21.852  [2024-12-09 16:39:50.851276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.851286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.851337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.851350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:29:21.852  [2024-12-09 16:39:50.851363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.851377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.851440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:21.852  [2024-12-09 16:39:50.851451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:29:21.852  [2024-12-09 16:39:50.851465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:21.852  [2024-12-09 16:39:50.851475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:21.852  [2024-12-09 16:39:50.851637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.834 ms, result 0
00:29:21.852  true
00:29:21.852   16:39:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82345
00:29:21.852   16:39:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82345
00:29:21.852   16:39:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:29:21.852  [2024-12-09 16:39:50.987691] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:29:21.852  [2024-12-09 16:39:50.987811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83238 ]
00:29:22.111  [2024-12-09 16:39:51.171464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.371  [2024-12-09 16:39:51.301146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.751  
[2024-12-09T16:39:53.868Z] Copying: 204/1024 [MB] (204 MBps)
[2024-12-09T16:39:54.806Z] Copying: 420/1024 [MB] (216 MBps)
[2024-12-09T16:39:55.742Z] Copying: 638/1024 [MB] (218 MBps)
[2024-12-09T16:39:56.680Z] Copying: 852/1024 [MB] (213 MBps)
[2024-12-09T16:39:57.619Z] Copying: 1024/1024 [MB] (average 212 MBps)
00:29:28.440  
00:29:28.440  /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82345 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:29:28.440   16:39:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:28.700  [2024-12-09 16:39:57.667909] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:29:28.700  [2024-12-09 16:39:57.668033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83312 ]
00:29:28.700  [2024-12-09 16:39:57.853304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:28.959  [2024-12-09 16:39:57.962481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:29.218  [2024-12-09 16:39:58.301702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:29.218  [2024-12-09 16:39:58.301773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:29.218  [2024-12-09 16:39:58.367754] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:29:29.218  [2024-12-09 16:39:58.368085] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:29:29.218  [2024-12-09 16:39:58.368239] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:29:29.788  [2024-12-09 16:39:58.683146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.788  [2024-12-09 16:39:58.683191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:29:29.789  [2024-12-09 16:39:58.683206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:29:29.789  [2024-12-09 16:39:58.683219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.683266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.683277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:29:29.789  [2024-12-09 16:39:58.683287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.028 ms
00:29:29.789  [2024-12-09 16:39:58.683296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.683316] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:29.789  [2024-12-09 16:39:58.684264] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:29.789  [2024-12-09 16:39:58.684286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.684296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:29:29.789  [2024-12-09 16:39:58.684307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.975 ms
00:29:29.789  [2024-12-09 16:39:58.684317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.685763] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:29.789  [2024-12-09 16:39:58.703904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.703944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:29:29.789  [2024-12-09 16:39:58.703958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.172 ms
00:29:29.789  [2024-12-09 16:39:58.703968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.704027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.704040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:29:29.789  [2024-12-09 16:39:58.704050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:29:29.789  [2024-12-09 16:39:58.704059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.710934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.710961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:29:29.789  [2024-12-09 16:39:58.710972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.818 ms
00:29:29.789  [2024-12-09 16:39:58.710982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.711054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.711066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:29:29.789  [2024-12-09 16:39:58.711077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:29:29.789  [2024-12-09 16:39:58.711086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.711127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.711139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:29:29.789  [2024-12-09 16:39:58.711148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:29:29.789  [2024-12-09 16:39:58.711157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.711179] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:29.789  [2024-12-09 16:39:58.715875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.715929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:29:29.789  [2024-12-09 16:39:58.715942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.707 ms
00:29:29.789  [2024-12-09 16:39:58.715952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.715985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.715996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:29:29.789  [2024-12-09 16:39:58.716006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:29:29.789  [2024-12-09 16:39:58.716016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.716070] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:29:29.789  [2024-12-09 16:39:58.716094] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:29:29.789  [2024-12-09 16:39:58.716139] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:29:29.789  [2024-12-09 16:39:58.716156] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:29:29.789  [2024-12-09 16:39:58.716239] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:29.789  [2024-12-09 16:39:58.716252] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:29.789  [2024-12-09 16:39:58.716264] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:29.789  [2024-12-09 16:39:58.716280] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716302] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:29:29.789  [2024-12-09 16:39:58.716311] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:29:29.789  [2024-12-09 16:39:58.716320] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:29:29.789  [2024-12-09 16:39:58.716328] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:29:29.789  [2024-12-09 16:39:58.716339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.716348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:29:29.789  [2024-12-09 16:39:58.716358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.270 ms
00:29:29.789  [2024-12-09 16:39:58.716367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.716432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.789  [2024-12-09 16:39:58.716445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:29:29.789  [2024-12-09 16:39:58.716455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:29:29.789  [2024-12-09 16:39:58.716464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.789  [2024-12-09 16:39:58.716547] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:29.789  [2024-12-09 16:39:58.716560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:29.789  [2024-12-09 16:39:58.716571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:29.789  [2024-12-09 16:39:58.716598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:29.789  [2024-12-09 16:39:58.716626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:29:29.789  [2024-12-09 16:39:58.716651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:29.789  [2024-12-09 16:39:58.716662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:29:29.789  [2024-12-09 16:39:58.716672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:29:29.789  [2024-12-09 16:39:58.716680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:29.789  [2024-12-09 16:39:58.716689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:29:29.789  [2024-12-09 16:39:58.716697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:29.789  [2024-12-09 16:39:58.716714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:29.789  [2024-12-09 16:39:58.716740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:29.789  [2024-12-09 16:39:58.716765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:29.789  [2024-12-09 16:39:58.716790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:29.789  [2024-12-09 16:39:58.716815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:29.789  [2024-12-09 16:39:58.716831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:29.789  [2024-12-09 16:39:58.716839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:29:29.789  [2024-12-09 16:39:58.716856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:29.789  [2024-12-09 16:39:58.716865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:29:29.789  [2024-12-09 16:39:58.716873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:29:29.789  [2024-12-09 16:39:58.716881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:29.789  [2024-12-09 16:39:58.716889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:29:29.789  [2024-12-09 16:39:58.716897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:29.789  [2024-12-09 16:39:58.716925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:29:29.789  [2024-12-09 16:39:58.716934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:29.789  [2024-12-09 16:39:58.716944] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:29.789  [2024-12-09 16:39:58.716953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:29.790  [2024-12-09 16:39:58.716966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:29:29.790  [2024-12-09 16:39:58.716975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:29.790  [2024-12-09 16:39:58.716984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:29.790  [2024-12-09 16:39:58.716993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:29:29.790  [2024-12-09 16:39:58.717002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:29:29.790  [2024-12-09 16:39:58.717011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:29.790  [2024-12-09 16:39:58.717019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:29:29.790  [2024-12-09 16:39:58.717027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:29:29.790  [2024-12-09 16:39:58.717037] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:29.790  [2024-12-09 16:39:58.717048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:29.790  [2024-12-09 16:39:58.717076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:29.790  [2024-12-09 16:39:58.717086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:29.790  [2024-12-09 16:39:58.717095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:29.790  [2024-12-09 16:39:58.717105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:29.790  [2024-12-09 16:39:58.717115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:29.790  [2024-12-09 16:39:58.717125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:29.790  [2024-12-09 16:39:58.717134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:29.790  [2024-12-09 16:39:58.717143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:29.790  [2024-12-09 16:39:58.717153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:29.790  [2024-12-09 16:39:58.717200] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:29.790  [2024-12-09 16:39:58.717210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:29.790  [2024-12-09 16:39:58.717231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:29.790  [2024-12-09 16:39:58.717241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:29.790  [2024-12-09 16:39:58.717251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:29.790  [2024-12-09 16:39:58.717262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.717271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:29:29.790  [2024-12-09 16:39:58.717280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.766 ms
00:29:29.790  [2024-12-09 16:39:58.717289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.756796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.756982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:29:29.790  [2024-12-09 16:39:58.757172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.523 ms
00:29:29.790  [2024-12-09 16:39:58.757211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.757379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.757418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:29:29.790  [2024-12-09 16:39:58.757594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:29:29.790  [2024-12-09 16:39:58.757627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.815875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.816094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:29:29.790  [2024-12-09 16:39:58.816251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 58.252 ms
00:29:29.790  [2024-12-09 16:39:58.816290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.816353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.816386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:29:29.790  [2024-12-09 16:39:58.816464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:29:29.790  [2024-12-09 16:39:58.816498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.817043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.817171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:29:29.790  [2024-12-09 16:39:58.817241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.437 ms
00:29:29.790  [2024-12-09 16:39:58.817283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.817424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.817460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:29:29.790  [2024-12-09 16:39:58.817529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.094 ms
00:29:29.790  [2024-12-09 16:39:58.817562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.835993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.836126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:29:29.790  [2024-12-09 16:39:58.836218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.375 ms
00:29:29.790  [2024-12-09 16:39:58.836282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.854715] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:29:29.790  [2024-12-09 16:39:58.854870] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:29:29.790  [2024-12-09 16:39:58.855034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.855049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:29:29.790  [2024-12-09 16:39:58.855061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.644 ms
00:29:29.790  [2024-12-09 16:39:58.855071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.883952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.884108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:29:29.790  [2024-12-09 16:39:58.884263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.873 ms
00:29:29.790  [2024-12-09 16:39:58.884302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.902552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.902696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:29:29.790  [2024-12-09 16:39:58.902801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.218 ms
00:29:29.790  [2024-12-09 16:39:58.902837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.920086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.920253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:29:29.790  [2024-12-09 16:39:58.920361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.221 ms
00:29:29.790  [2024-12-09 16:39:58.920397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:29.790  [2024-12-09 16:39:58.921203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:29.790  [2024-12-09 16:39:58.921330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:29:29.790  [2024-12-09 16:39:58.921405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.675 ms
00:29:29.790  [2024-12-09 16:39:58.921471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.008190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.008373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:29:30.050  [2024-12-09 16:39:59.008461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 86.808 ms
00:29:30.050  [2024-12-09 16:39:59.008498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.019760] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:29:30.050  [2024-12-09 16:39:59.023162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.023320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:29:30.050  [2024-12-09 16:39:59.023457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.568 ms
00:29:30.050  [2024-12-09 16:39:59.023500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.023622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.023677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:29:30.050  [2024-12-09 16:39:59.023715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:29:30.050  [2024-12-09 16:39:59.023746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.023863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.023911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:29:30.050  [2024-12-09 16:39:59.023969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:29:30.050  [2024-12-09 16:39:59.024004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.024056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.024090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:29:30.050  [2024-12-09 16:39:59.024121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:29:30.050  [2024-12-09 16:39:59.024151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.024206] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:30.050  [2024-12-09 16:39:59.024317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.024354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:29:30.050  [2024-12-09 16:39:59.024385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.111 ms
00:29:30.050  [2024-12-09 16:39:59.024422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.061172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.061403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:29:30.050  [2024-12-09 16:39:59.061501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.758 ms
00:29:30.050  [2024-12-09 16:39:59.061538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.061672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.050  [2024-12-09 16:39:59.061713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:29:30.050  [2024-12-09 16:39:59.061797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:29:30.050  [2024-12-09 16:39:59.061832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:30.050  [2024-12-09 16:39:59.063011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.032 ms, result 0
00:29:30.989  
[2024-12-09T16:40:01.116Z] Copying: 24/1024 [MB] (24 MBps)
[2024-12-09T16:40:02.494Z] Copying: 47/1024 [MB] (23 MBps)
[2024-12-09T16:40:03.432Z] Copying: 71/1024 [MB] (23 MBps)
[2024-12-09T16:40:04.367Z] Copying: 94/1024 [MB] (23 MBps)
[2024-12-09T16:40:05.303Z] Copying: 118/1024 [MB] (23 MBps)
[2024-12-09T16:40:06.240Z] Copying: 141/1024 [MB] (23 MBps)
[2024-12-09T16:40:07.178Z] Copying: 164/1024 [MB] (22 MBps)
[2024-12-09T16:40:08.116Z] Copying: 187/1024 [MB] (22 MBps)
[2024-12-09T16:40:09.495Z] Copying: 210/1024 [MB] (23 MBps)
[2024-12-09T16:40:10.064Z] Copying: 233/1024 [MB] (23 MBps)
[2024-12-09T16:40:11.444Z] Copying: 256/1024 [MB] (22 MBps)
[2024-12-09T16:40:12.380Z] Copying: 279/1024 [MB] (22 MBps)
[2024-12-09T16:40:13.320Z] Copying: 302/1024 [MB] (22 MBps)
[2024-12-09T16:40:14.258Z] Copying: 324/1024 [MB] (22 MBps)
[2024-12-09T16:40:15.196Z] Copying: 348/1024 [MB] (23 MBps)
[2024-12-09T16:40:16.134Z] Copying: 371/1024 [MB] (23 MBps)
[2024-12-09T16:40:17.072Z] Copying: 394/1024 [MB] (23 MBps)
[2024-12-09T16:40:18.452Z] Copying: 417/1024 [MB] (23 MBps)
[2024-12-09T16:40:19.390Z] Copying: 441/1024 [MB] (23 MBps)
[2024-12-09T16:40:20.329Z] Copying: 464/1024 [MB] (23 MBps)
[2024-12-09T16:40:21.268Z] Copying: 487/1024 [MB] (23 MBps)
[2024-12-09T16:40:22.206Z] Copying: 511/1024 [MB] (23 MBps)
[2024-12-09T16:40:23.144Z] Copying: 534/1024 [MB] (23 MBps)
[2024-12-09T16:40:24.083Z] Copying: 558/1024 [MB] (23 MBps)
[2024-12-09T16:40:25.462Z] Copying: 581/1024 [MB] (23 MBps)
[2024-12-09T16:40:26.400Z] Copying: 605/1024 [MB] (23 MBps)
[2024-12-09T16:40:27.338Z] Copying: 629/1024 [MB] (23 MBps)
[2024-12-09T16:40:28.276Z] Copying: 652/1024 [MB] (23 MBps)
[2024-12-09T16:40:29.214Z] Copying: 676/1024 [MB] (23 MBps)
[2024-12-09T16:40:30.215Z] Copying: 699/1024 [MB] (23 MBps)
[2024-12-09T16:40:31.153Z] Copying: 723/1024 [MB] (23 MBps)
[2024-12-09T16:40:32.090Z] Copying: 746/1024 [MB] (23 MBps)
[2024-12-09T16:40:33.027Z] Copying: 770/1024 [MB] (23 MBps)
[2024-12-09T16:40:34.412Z] Copying: 794/1024 [MB] (23 MBps)
[2024-12-09T16:40:35.350Z] Copying: 817/1024 [MB] (23 MBps)
[2024-12-09T16:40:36.288Z] Copying: 841/1024 [MB] (24 MBps)
[2024-12-09T16:40:37.225Z] Copying: 866/1024 [MB] (24 MBps)
[2024-12-09T16:40:38.163Z] Copying: 890/1024 [MB] (24 MBps)
[2024-12-09T16:40:39.100Z] Copying: 914/1024 [MB] (24 MBps)
[2024-12-09T16:40:40.037Z] Copying: 938/1024 [MB] (23 MBps)
[2024-12-09T16:40:41.416Z] Copying: 962/1024 [MB] (23 MBps)
[2024-12-09T16:40:42.355Z] Copying: 986/1024 [MB] (24 MBps)
[2024-12-09T16:40:43.293Z] Copying: 1009/1024 [MB] (23 MBps)
[2024-12-09T16:40:43.552Z] Copying: 1023/1024 [MB] (13 MBps)
[2024-12-09T16:40:43.552Z] Copying: 1024/1024 [MB] (average 23 MBps)
00:30:14.373  [2024-12-09 16:40:43.364377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.364554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:30:14.373  [2024-12-09 16:40:43.364595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:30:14.373  [2024-12-09 16:40:43.364610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.373  [2024-12-09 16:40:43.365566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:14.373  [2024-12-09 16:40:43.370510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.370669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:30:14.373  [2024-12-09 16:40:43.370692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.917 ms
00:30:14.373  [2024-12-09 16:40:43.370709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.373  [2024-12-09 16:40:43.381468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.381622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:30:14.373  [2024-12-09 16:40:43.381644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.941 ms
00:30:14.373  [2024-12-09 16:40:43.381656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.373  [2024-12-09 16:40:43.405370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.405414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:30:14.373  [2024-12-09 16:40:43.405435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.728 ms
00:30:14.373  [2024-12-09 16:40:43.405446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.373  [2024-12-09 16:40:43.410457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.410490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:30:14.373  [2024-12-09 16:40:43.410502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.964 ms
00:30:14.373  [2024-12-09 16:40:43.410513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.373  [2024-12-09 16:40:43.445561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.445595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:30:14.373  [2024-12-09 16:40:43.445608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.063 ms
00:30:14.373  [2024-12-09 16:40:43.445618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.373  [2024-12-09 16:40:43.465780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.373  [2024-12-09 16:40:43.465944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:30:14.373  [2024-12-09 16:40:43.465965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.157 ms
00:30:14.373  [2024-12-09 16:40:43.465976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.634  [2024-12-09 16:40:43.588973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.634  [2024-12-09 16:40:43.589135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:30:14.634  [2024-12-09 16:40:43.589164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 123.140 ms
00:30:14.634  [2024-12-09 16:40:43.589175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.634  [2024-12-09 16:40:43.623698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.634  [2024-12-09 16:40:43.623835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:30:14.634  [2024-12-09 16:40:43.623854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.556 ms
00:30:14.634  [2024-12-09 16:40:43.623894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.634  [2024-12-09 16:40:43.658318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.634  [2024-12-09 16:40:43.658352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:30:14.634  [2024-12-09 16:40:43.658363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.430 ms
00:30:14.634  [2024-12-09 16:40:43.658372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.634  [2024-12-09 16:40:43.691847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.634  [2024-12-09 16:40:43.691881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:30:14.634  [2024-12-09 16:40:43.691905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.493 ms
00:30:14.634  [2024-12-09 16:40:43.691916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.634  [2024-12-09 16:40:43.725098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.634  [2024-12-09 16:40:43.725131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:30:14.634  [2024-12-09 16:40:43.725142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.147 ms
00:30:14.634  [2024-12-09 16:40:43.725151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.634  [2024-12-09 16:40:43.725187] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:14.634  [2024-12-09 16:40:43.725202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   110336 / 261120 	wr_cnt: 1	state: open
00:30:14.634  [2024-12-09 16:40:43.725214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.634  [2024-12-09 16:40:43.725784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.725994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:30:14.635  [2024-12-09 16:40:43.726246] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:30:14.635  [2024-12-09 16:40:43.726256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         b49a9023-73d5-44b8-8ac1-3392f825704d
00:30:14.635  [2024-12-09 16:40:43.726281] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    110336
00:30:14.635  [2024-12-09 16:40:43.726290] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        111296
00:30:14.635  [2024-12-09 16:40:43.726300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         110336
00:30:14.635  [2024-12-09 16:40:43.726310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0087
00:30:14.635  [2024-12-09 16:40:43.726319] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:14.635  [2024-12-09 16:40:43.726329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:30:14.635  [2024-12-09 16:40:43.726338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:30:14.635  [2024-12-09 16:40:43.726347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:30:14.635  [2024-12-09 16:40:43.726360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:30:14.635  [2024-12-09 16:40:43.726369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.635  [2024-12-09 16:40:43.726381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:30:14.635  [2024-12-09 16:40:43.726391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.185 ms
00:30:14.635  [2024-12-09 16:40:43.726400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.635  [2024-12-09 16:40:43.745465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.635  [2024-12-09 16:40:43.745498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:30:14.635  [2024-12-09 16:40:43.745510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.047 ms
00:30:14.635  [2024-12-09 16:40:43.745519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.635  [2024-12-09 16:40:43.746059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.635  [2024-12-09 16:40:43.746072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:30:14.635  [2024-12-09 16:40:43.746088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.521 ms
00:30:14.635  [2024-12-09 16:40:43.746097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.635  [2024-12-09 16:40:43.793374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.635  [2024-12-09 16:40:43.793406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:14.635  [2024-12-09 16:40:43.793419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.635  [2024-12-09 16:40:43.793429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.635  [2024-12-09 16:40:43.793476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.635  [2024-12-09 16:40:43.793486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:14.635  [2024-12-09 16:40:43.793501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.635  [2024-12-09 16:40:43.793511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.635  [2024-12-09 16:40:43.793584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.635  [2024-12-09 16:40:43.793605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:14.635  [2024-12-09 16:40:43.793614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.635  [2024-12-09 16:40:43.793624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.635  [2024-12-09 16:40:43.793639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.635  [2024-12-09 16:40:43.793649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:14.635  [2024-12-09 16:40:43.793659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.635  [2024-12-09 16:40:43.793668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:43.909398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:43.909446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:14.895  [2024-12-09 16:40:43.909459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:43.909469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.005555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.005601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:14.895  [2024-12-09 16:40:44.005614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.005631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.005720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.005731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:14.895  [2024-12-09 16:40:44.005742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.005751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.005788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.005799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:14.895  [2024-12-09 16:40:44.005809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.005818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.005954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.005978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:14.895  [2024-12-09 16:40:44.005988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.005999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.006035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.006048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:30:14.895  [2024-12-09 16:40:44.006057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.006067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.006106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.006117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:14.895  [2024-12-09 16:40:44.006127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.006136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.006176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:14.895  [2024-12-09 16:40:44.006188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:14.895  [2024-12-09 16:40:44.006198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:14.895  [2024-12-09 16:40:44.006207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:14.895  [2024-12-09 16:40:44.006328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.949 ms, result 0
00:30:16.800  
00:30:16.800  
00:30:16.800   16:40:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:30:18.178   16:40:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:30:18.178  [2024-12-09 16:40:47.335386] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:30:18.178  [2024-12-09 16:40:47.335495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83809 ]
00:30:18.437  [2024-12-09 16:40:47.514099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:18.697  [2024-12-09 16:40:47.625160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:18.956  [2024-12-09 16:40:47.992463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:18.956  [2024-12-09 16:40:47.992543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:19.217  [2024-12-09 16:40:48.154682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.154738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:30:19.217  [2024-12-09 16:40:48.154756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:30:19.217  [2024-12-09 16:40:48.154767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.154817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.154832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:19.217  [2024-12-09 16:40:48.154844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.028 ms
00:30:19.217  [2024-12-09 16:40:48.154856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.154879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:19.217  [2024-12-09 16:40:48.155909] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:19.217  [2024-12-09 16:40:48.155952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.155965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:19.217  [2024-12-09 16:40:48.155978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.079 ms
00:30:19.217  [2024-12-09 16:40:48.155990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.157465] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:30:19.217  [2024-12-09 16:40:48.175980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.176025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:30:19.217  [2024-12-09 16:40:48.176042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.544 ms
00:30:19.217  [2024-12-09 16:40:48.176054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.176128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.176142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:30:19.217  [2024-12-09 16:40:48.176155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:30:19.217  [2024-12-09 16:40:48.176166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.183238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.183435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:19.217  [2024-12-09 16:40:48.183458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.005 ms
00:30:19.217  [2024-12-09 16:40:48.183477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.183564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.183578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:19.217  [2024-12-09 16:40:48.183592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.061 ms
00:30:19.217  [2024-12-09 16:40:48.183603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.183651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.183664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:30:19.217  [2024-12-09 16:40:48.183677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:30:19.217  [2024-12-09 16:40:48.183689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.183722] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:19.217  [2024-12-09 16:40:48.188396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.188434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:19.217  [2024-12-09 16:40:48.188452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.687 ms
00:30:19.217  [2024-12-09 16:40:48.188463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.188500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.188514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:30:19.217  [2024-12-09 16:40:48.188543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:30:19.217  [2024-12-09 16:40:48.188556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.188615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:30:19.217  [2024-12-09 16:40:48.188645] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:30:19.217  [2024-12-09 16:40:48.188683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:30:19.217  [2024-12-09 16:40:48.188707] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:30:19.217  [2024-12-09 16:40:48.188799] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:30:19.217  [2024-12-09 16:40:48.188814] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:30:19.217  [2024-12-09 16:40:48.188830] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:30:19.217  [2024-12-09 16:40:48.188844] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:30:19.217  [2024-12-09 16:40:48.188858] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:30:19.217  [2024-12-09 16:40:48.188872] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:30:19.217  [2024-12-09 16:40:48.188884] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:30:19.217  [2024-12-09 16:40:48.188900] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:30:19.217  [2024-12-09 16:40:48.188930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:30:19.217  [2024-12-09 16:40:48.188944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.188956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:30:19.217  [2024-12-09 16:40:48.188968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.332 ms
00:30:19.217  [2024-12-09 16:40:48.188979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.189058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.217  [2024-12-09 16:40:48.189071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:30:19.217  [2024-12-09 16:40:48.189084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:30:19.217  [2024-12-09 16:40:48.189106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.217  [2024-12-09 16:40:48.189210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:30:19.217  [2024-12-09 16:40:48.189228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:30:19.217  [2024-12-09 16:40:48.189241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:30:19.217  [2024-12-09 16:40:48.189254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:19.217  [2024-12-09 16:40:48.189266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:30:19.217  [2024-12-09 16:40:48.189277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:30:19.217  [2024-12-09 16:40:48.189289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:30:19.217  [2024-12-09 16:40:48.189300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:30:19.217  [2024-12-09 16:40:48.189312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:30:19.217  [2024-12-09 16:40:48.189323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:30:19.217  [2024-12-09 16:40:48.189334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:30:19.217  [2024-12-09 16:40:48.189346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:30:19.217  [2024-12-09 16:40:48.189357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:30:19.217  [2024-12-09 16:40:48.189387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:30:19.217  [2024-12-09 16:40:48.189400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:30:19.217  [2024-12-09 16:40:48.189411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:19.217  [2024-12-09 16:40:48.189423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:30:19.217  [2024-12-09 16:40:48.189434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:30:19.217  [2024-12-09 16:40:48.189446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:19.217  [2024-12-09 16:40:48.189458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:30:19.217  [2024-12-09 16:40:48.189469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:30:19.217  [2024-12-09 16:40:48.189480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:19.218  [2024-12-09 16:40:48.189491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:30:19.218  [2024-12-09 16:40:48.189502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:19.218  [2024-12-09 16:40:48.189524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:30:19.218  [2024-12-09 16:40:48.189535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:19.218  [2024-12-09 16:40:48.189556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:30:19.218  [2024-12-09 16:40:48.189567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:19.218  [2024-12-09 16:40:48.189589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:30:19.218  [2024-12-09 16:40:48.189599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:30:19.218  [2024-12-09 16:40:48.189621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:30:19.218  [2024-12-09 16:40:48.189632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:30:19.218  [2024-12-09 16:40:48.189643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:30:19.218  [2024-12-09 16:40:48.189654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:30:19.218  [2024-12-09 16:40:48.189666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:30:19.218  [2024-12-09 16:40:48.189677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:30:19.218  [2024-12-09 16:40:48.189699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:30:19.218  [2024-12-09 16:40:48.189710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:30:19.218  [2024-12-09 16:40:48.189733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:30:19.218  [2024-12-09 16:40:48.189747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:30:19.218  [2024-12-09 16:40:48.189758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:19.218  [2024-12-09 16:40:48.189770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:30:19.218  [2024-12-09 16:40:48.189782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:30:19.218  [2024-12-09 16:40:48.189793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:30:19.218  [2024-12-09 16:40:48.189804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:30:19.218  [2024-12-09 16:40:48.189815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:30:19.218  [2024-12-09 16:40:48.189827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:30:19.218  [2024-12-09 16:40:48.189840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:30:19.218  [2024-12-09 16:40:48.189855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.189875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:30:19.218  [2024-12-09 16:40:48.189888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:30:19.218  [2024-12-09 16:40:48.189916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:30:19.218  [2024-12-09 16:40:48.189929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:30:19.218  [2024-12-09 16:40:48.189943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:30:19.218  [2024-12-09 16:40:48.189955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:30:19.218  [2024-12-09 16:40:48.189968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:30:19.218  [2024-12-09 16:40:48.189980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:30:19.218  [2024-12-09 16:40:48.189992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:30:19.218  [2024-12-09 16:40:48.190004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.190016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.190028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.190040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.190053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:30:19.218  [2024-12-09 16:40:48.190065] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:30:19.218  [2024-12-09 16:40:48.190078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.190091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:30:19.218  [2024-12-09 16:40:48.190102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:30:19.218  [2024-12-09 16:40:48.190114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:30:19.218  [2024-12-09 16:40:48.190126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:30:19.218  [2024-12-09 16:40:48.190138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.190151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:30:19.218  [2024-12-09 16:40:48.190164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.984 ms
00:30:19.218  [2024-12-09 16:40:48.190177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.229065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.229111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:19.218  [2024-12-09 16:40:48.229126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.890 ms
00:30:19.218  [2024-12-09 16:40:48.229160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.229237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.229250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:30:19.218  [2024-12-09 16:40:48.229263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.049 ms
00:30:19.218  [2024-12-09 16:40:48.229275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.302854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.303063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:19.218  [2024-12-09 16:40:48.303088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 73.630 ms
00:30:19.218  [2024-12-09 16:40:48.303102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.303148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.303162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:19.218  [2024-12-09 16:40:48.303182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:30:19.218  [2024-12-09 16:40:48.303194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.303698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.303714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:19.218  [2024-12-09 16:40:48.303727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.423 ms
00:30:19.218  [2024-12-09 16:40:48.303739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.303861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.303876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:19.218  [2024-12-09 16:40:48.303915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.096 ms
00:30:19.218  [2024-12-09 16:40:48.303927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.323013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.323051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:19.218  [2024-12-09 16:40:48.323082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.089 ms
00:30:19.218  [2024-12-09 16:40:48.323097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.341911] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:30:19.218  [2024-12-09 16:40:48.341957] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:30:19.218  [2024-12-09 16:40:48.341974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.341987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:30:19.218  [2024-12-09 16:40:48.342001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.795 ms
00:30:19.218  [2024-12-09 16:40:48.342012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.371757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.371934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:30:19.218  [2024-12-09 16:40:48.372042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.746 ms
00:30:19.218  [2024-12-09 16:40:48.372084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.218  [2024-12-09 16:40:48.389412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.218  [2024-12-09 16:40:48.389581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:30:19.218  [2024-12-09 16:40:48.389668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.279 ms
00:30:19.218  [2024-12-09 16:40:48.389707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.406539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.406697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:30:19.478  [2024-12-09 16:40:48.406817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.795 ms
00:30:19.478  [2024-12-09 16:40:48.406859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.407621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.407763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:30:19.478  [2024-12-09 16:40:48.407868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.606 ms
00:30:19.478  [2024-12-09 16:40:48.407972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.489936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.490001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:30:19.478  [2024-12-09 16:40:48.490043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 81.851 ms
00:30:19.478  [2024-12-09 16:40:48.490056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.500326] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:30:19.478  [2024-12-09 16:40:48.502683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.502716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:30:19.478  [2024-12-09 16:40:48.502731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.597 ms
00:30:19.478  [2024-12-09 16:40:48.502742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.502825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.502839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:30:19.478  [2024-12-09 16:40:48.502855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:30:19.478  [2024-12-09 16:40:48.502867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.504432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.504596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:30:19.478  [2024-12-09 16:40:48.504618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.486 ms
00:30:19.478  [2024-12-09 16:40:48.504631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.504672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.504685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:30:19.478  [2024-12-09 16:40:48.504699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:30:19.478  [2024-12-09 16:40:48.504711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.504759] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:30:19.478  [2024-12-09 16:40:48.504774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.478  [2024-12-09 16:40:48.504786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:30:19.478  [2024-12-09 16:40:48.504798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:30:19.478  [2024-12-09 16:40:48.504810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.478  [2024-12-09 16:40:48.539452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.479  [2024-12-09 16:40:48.539491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:30:19.479  [2024-12-09 16:40:48.539514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.674 ms
00:30:19.479  [2024-12-09 16:40:48.539525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.479  [2024-12-09 16:40:48.539597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:19.479  [2024-12-09 16:40:48.539611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:30:19.479  [2024-12-09 16:40:48.539624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:30:19.479  [2024-12-09 16:40:48.539635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:19.479  [2024-12-09 16:40:48.540776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.264 ms, result 0
00:30:20.858  
[2024-12-09T16:40:50.974Z] Copying: 1228/1048576 [kB] (1228 kBps)
[2024-12-09T16:40:51.911Z] Copying: 9288/1048576 [kB] (8060 kBps)
[2024-12-09T16:40:52.849Z] Copying: 38/1024 [MB] (29 MBps)
[2024-12-09T16:40:53.786Z] Copying: 68/1024 [MB] (29 MBps)
[2024-12-09T16:40:55.165Z] Copying: 98/1024 [MB] (29 MBps)
[2024-12-09T16:40:56.101Z] Copying: 128/1024 [MB] (30 MBps)
[2024-12-09T16:40:57.037Z] Copying: 158/1024 [MB] (30 MBps)
[2024-12-09T16:40:57.989Z] Copying: 188/1024 [MB] (30 MBps)
[2024-12-09T16:40:59.008Z] Copying: 218/1024 [MB] (30 MBps)
[2024-12-09T16:40:59.946Z] Copying: 248/1024 [MB] (29 MBps)
[2024-12-09T16:41:00.883Z] Copying: 278/1024 [MB] (30 MBps)
[2024-12-09T16:41:01.821Z] Copying: 308/1024 [MB] (30 MBps)
[2024-12-09T16:41:02.758Z] Copying: 339/1024 [MB] (30 MBps)
[2024-12-09T16:41:04.137Z] Copying: 369/1024 [MB] (30 MBps)
[2024-12-09T16:41:05.077Z] Copying: 399/1024 [MB] (30 MBps)
[2024-12-09T16:41:06.016Z] Copying: 429/1024 [MB] (29 MBps)
[2024-12-09T16:41:06.955Z] Copying: 459/1024 [MB] (30 MBps)
[2024-12-09T16:41:07.897Z] Copying: 489/1024 [MB] (30 MBps)
[2024-12-09T16:41:08.837Z] Copying: 520/1024 [MB] (30 MBps)
[2024-12-09T16:41:09.776Z] Copying: 550/1024 [MB] (30 MBps)
[2024-12-09T16:41:11.157Z] Copying: 581/1024 [MB] (30 MBps)
[2024-12-09T16:41:11.727Z] Copying: 612/1024 [MB] (30 MBps)
[2024-12-09T16:41:13.108Z] Copying: 642/1024 [MB] (30 MBps)
[2024-12-09T16:41:14.048Z] Copying: 672/1024 [MB] (30 MBps)
[2024-12-09T16:41:14.987Z] Copying: 702/1024 [MB] (30 MBps)
[2024-12-09T16:41:15.925Z] Copying: 732/1024 [MB] (30 MBps)
[2024-12-09T16:41:16.864Z] Copying: 762/1024 [MB] (29 MBps)
[2024-12-09T16:41:17.802Z] Copying: 792/1024 [MB] (29 MBps)
[2024-12-09T16:41:18.742Z] Copying: 822/1024 [MB] (30 MBps)
[2024-12-09T16:41:20.123Z] Copying: 852/1024 [MB] (30 MBps)
[2024-12-09T16:41:21.062Z] Copying: 883/1024 [MB] (30 MBps)
[2024-12-09T16:41:22.002Z] Copying: 913/1024 [MB] (30 MBps)
[2024-12-09T16:41:22.941Z] Copying: 943/1024 [MB] (30 MBps)
[2024-12-09T16:41:23.880Z] Copying: 973/1024 [MB] (30 MBps)
[2024-12-09T16:41:24.448Z] Copying: 1003/1024 [MB] (30 MBps)
[2024-12-09T16:41:25.018Z] Copying: 1024/1024 [MB] (average 28 MBps)
00:30:55.839  [2024-12-09 16:41:24.833527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.833593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:30:55.839  [2024-12-09 16:41:24.833614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:30:55.839  [2024-12-09 16:41:24.833628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.833660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:55.839  [2024-12-09 16:41:24.838812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.838859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:30:55.839  [2024-12-09 16:41:24.838876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.136 ms
00:30:55.839  [2024-12-09 16:41:24.838890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.839133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.839156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:30:55.839  [2024-12-09 16:41:24.839171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.187 ms
00:30:55.839  [2024-12-09 16:41:24.839183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.853806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.853862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:30:55.839  [2024-12-09 16:41:24.853881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.623 ms
00:30:55.839  [2024-12-09 16:41:24.853912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.860260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.860309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:30:55.839  [2024-12-09 16:41:24.860333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.311 ms
00:30:55.839  [2024-12-09 16:41:24.860346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.895744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.895947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:30:55.839  [2024-12-09 16:41:24.895972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.397 ms
00:30:55.839  [2024-12-09 16:41:24.895985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.915066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.915240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:30:55.839  [2024-12-09 16:41:24.915264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.065 ms
00:30:55.839  [2024-12-09 16:41:24.915292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.917126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.917167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:30:55.839  [2024-12-09 16:41:24.917182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.789 ms
00:30:55.839  [2024-12-09 16:41:24.917202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.952373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.952539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:30:55.839  [2024-12-09 16:41:24.952562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.206 ms
00:30:55.839  [2024-12-09 16:41:24.952590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:55.839  [2024-12-09 16:41:24.986147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:55.839  [2024-12-09 16:41:24.986240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:30:55.839  [2024-12-09 16:41:24.986255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.568 ms
00:30:55.839  [2024-12-09 16:41:24.986265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.099  [2024-12-09 16:41:25.020135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:56.099  [2024-12-09 16:41:25.020173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:30:56.099  [2024-12-09 16:41:25.020187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.883 ms
00:30:56.099  [2024-12-09 16:41:25.020199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.099  [2024-12-09 16:41:25.053310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:56.099  [2024-12-09 16:41:25.053470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:30:56.099  [2024-12-09 16:41:25.053509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.083 ms
00:30:56.099  [2024-12-09 16:41:25.053521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.099  [2024-12-09 16:41:25.053562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:56.099  [2024-12-09 16:41:25.053580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:30:56.099  [2024-12-09 16:41:25.053595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:     1536 / 261120 	wr_cnt: 1	state: open
00:30:56.099  [2024-12-09 16:41:25.053609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.053990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.099  [2024-12-09 16:41:25.054225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:30:56.100  [2024-12-09 16:41:25.054868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:30:56.100  [2024-12-09 16:41:25.054880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         b49a9023-73d5-44b8-8ac1-3392f825704d
00:30:56.100  [2024-12-09 16:41:25.054893] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    262656
00:30:56.100  [2024-12-09 16:41:25.054915] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        154304
00:30:56.100  [2024-12-09 16:41:25.054932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         152320
00:30:56.100  [2024-12-09 16:41:25.054945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0130
00:30:56.100  [2024-12-09 16:41:25.054956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:56.100  [2024-12-09 16:41:25.054981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:30:56.100  [2024-12-09 16:41:25.054992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:30:56.100  [2024-12-09 16:41:25.055005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:30:56.100  [2024-12-09 16:41:25.055017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:30:56.100  [2024-12-09 16:41:25.055028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:56.100  [2024-12-09 16:41:25.055040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:30:56.100  [2024-12-09 16:41:25.055053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.470 ms
00:30:56.100  [2024-12-09 16:41:25.055065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.074075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:56.100  [2024-12-09 16:41:25.074110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:30:56.100  [2024-12-09 16:41:25.074124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.999 ms
00:30:56.100  [2024-12-09 16:41:25.074151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.074709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:56.100  [2024-12-09 16:41:25.074726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:30:56.100  [2024-12-09 16:41:25.074739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.534 ms
00:30:56.100  [2024-12-09 16:41:25.074749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.124375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.100  [2024-12-09 16:41:25.124411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:56.100  [2024-12-09 16:41:25.124426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.100  [2024-12-09 16:41:25.124438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.124489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.100  [2024-12-09 16:41:25.124502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:56.100  [2024-12-09 16:41:25.124513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.100  [2024-12-09 16:41:25.124524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.124600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.100  [2024-12-09 16:41:25.124613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:56.100  [2024-12-09 16:41:25.124624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.100  [2024-12-09 16:41:25.124635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.124654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.100  [2024-12-09 16:41:25.124666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:56.100  [2024-12-09 16:41:25.124678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.100  [2024-12-09 16:41:25.124688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.100  [2024-12-09 16:41:25.239513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.100  [2024-12-09 16:41:25.239565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:56.100  [2024-12-09 16:41:25.239582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.100  [2024-12-09 16:41:25.239594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.359  [2024-12-09 16:41:25.334915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.359  [2024-12-09 16:41:25.335146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:56.359  [2024-12-09 16:41:25.335171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.359  [2024-12-09 16:41:25.335199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.359  [2024-12-09 16:41:25.335294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.359  [2024-12-09 16:41:25.335313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:56.359  [2024-12-09 16:41:25.335326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.359  [2024-12-09 16:41:25.335338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.359  [2024-12-09 16:41:25.335378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.359  [2024-12-09 16:41:25.335391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:56.359  [2024-12-09 16:41:25.335403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.359  [2024-12-09 16:41:25.335415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.359  [2024-12-09 16:41:25.335547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.359  [2024-12-09 16:41:25.335562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:56.359  [2024-12-09 16:41:25.335579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.359  [2024-12-09 16:41:25.335591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.359  [2024-12-09 16:41:25.335634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.359  [2024-12-09 16:41:25.335649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:30:56.359  [2024-12-09 16:41:25.335661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.359  [2024-12-09 16:41:25.335673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.359  [2024-12-09 16:41:25.335713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.359  [2024-12-09 16:41:25.335726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:56.359  [2024-12-09 16:41:25.335743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.359  [2024-12-09 16:41:25.335755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.360  [2024-12-09 16:41:25.335801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:56.360  [2024-12-09 16:41:25.335814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:56.360  [2024-12-09 16:41:25.335827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:56.360  [2024-12-09 16:41:25.335839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:56.360  [2024-12-09 16:41:25.335993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.231 ms, result 0
00:30:57.299  
00:30:57.299  
00:30:57.299   16:41:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:59.257  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:59.257   16:41:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:30:59.257  [2024-12-09 16:41:28.060100] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:30:59.257  [2024-12-09 16:41:28.060377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84222 ]
00:30:59.257  [2024-12-09 16:41:28.241232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:59.257  [2024-12-09 16:41:28.359215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:59.828  [2024-12-09 16:41:28.705655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:59.828  [2024-12-09 16:41:28.705735] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:59.828  [2024-12-09 16:41:28.867726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.867782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:30:59.828  [2024-12-09 16:41:28.867799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:30:59.828  [2024-12-09 16:41:28.867811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.867860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.867876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:59.828  [2024-12-09 16:41:28.867889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.028 ms
00:30:59.828  [2024-12-09 16:41:28.867916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.867958] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:59.828  [2024-12-09 16:41:28.868891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:59.828  [2024-12-09 16:41:28.868934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.868946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:59.828  [2024-12-09 16:41:28.868959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.983 ms
00:30:59.828  [2024-12-09 16:41:28.868971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.870492] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:30:59.828  [2024-12-09 16:41:28.888387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.888581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:30:59.828  [2024-12-09 16:41:28.888623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.924 ms
00:30:59.828  [2024-12-09 16:41:28.888636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.888707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.888722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:30:59.828  [2024-12-09 16:41:28.888735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:30:59.828  [2024-12-09 16:41:28.888747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.895698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.895862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:59.828  [2024-12-09 16:41:28.895900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.879 ms
00:30:59.828  [2024-12-09 16:41:28.895919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.896022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.896036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:59.828  [2024-12-09 16:41:28.896049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.061 ms
00:30:59.828  [2024-12-09 16:41:28.896062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.896109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.828  [2024-12-09 16:41:28.896123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:30:59.828  [2024-12-09 16:41:28.896135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:30:59.828  [2024-12-09 16:41:28.896147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.828  [2024-12-09 16:41:28.896180] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:59.828  [2024-12-09 16:41:28.900970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.829  [2024-12-09 16:41:28.901003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:59.829  [2024-12-09 16:41:28.901020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.803 ms
00:30:59.829  [2024-12-09 16:41:28.901031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.829  [2024-12-09 16:41:28.901068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.829  [2024-12-09 16:41:28.901081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:30:59.829  [2024-12-09 16:41:28.901093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:30:59.829  [2024-12-09 16:41:28.901104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.829  [2024-12-09 16:41:28.901185] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:30:59.829  [2024-12-09 16:41:28.901213] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:30:59.829  [2024-12-09 16:41:28.901249] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:30:59.829  [2024-12-09 16:41:28.901273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:30:59.829  [2024-12-09 16:41:28.901362] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:30:59.829  [2024-12-09 16:41:28.901378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:30:59.829  [2024-12-09 16:41:28.901393] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:30:59.829  [2024-12-09 16:41:28.901408] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:30:59.829  [2024-12-09 16:41:28.901422] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:30:59.829  [2024-12-09 16:41:28.901434] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:30:59.829  [2024-12-09 16:41:28.901446] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:30:59.829  [2024-12-09 16:41:28.901461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:30:59.829  [2024-12-09 16:41:28.901472] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:30:59.829  [2024-12-09 16:41:28.901485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.829  [2024-12-09 16:41:28.901496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:30:59.829  [2024-12-09 16:41:28.901508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.303 ms
00:30:59.829  [2024-12-09 16:41:28.901519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.829  [2024-12-09 16:41:28.901594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.829  [2024-12-09 16:41:28.901607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:30:59.829  [2024-12-09 16:41:28.901619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.053 ms
00:30:59.829  [2024-12-09 16:41:28.901630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.829  [2024-12-09 16:41:28.901732] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:30:59.829  [2024-12-09 16:41:28.901748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:30:59.829  [2024-12-09 16:41:28.901761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:30:59.829  [2024-12-09 16:41:28.901773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.901785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:30:59.829  [2024-12-09 16:41:28.901796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.901807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:30:59.829  [2024-12-09 16:41:28.901820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:30:59.829  [2024-12-09 16:41:28.901831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:30:59.829  [2024-12-09 16:41:28.901842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:30:59.829  [2024-12-09 16:41:28.901853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:30:59.829  [2024-12-09 16:41:28.901866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:30:59.829  [2024-12-09 16:41:28.901877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:30:59.829  [2024-12-09 16:41:28.901899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:30:59.829  [2024-12-09 16:41:28.902101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:30:59.829  [2024-12-09 16:41:28.902153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:30:59.829  [2024-12-09 16:41:28.902225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:30:59.829  [2024-12-09 16:41:28.902259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:30:59.829  [2024-12-09 16:41:28.902327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:59.829  [2024-12-09 16:41:28.902452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:30:59.829  [2024-12-09 16:41:28.902491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:59.829  [2024-12-09 16:41:28.902559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:30:59.829  [2024-12-09 16:41:28.902594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:59.829  [2024-12-09 16:41:28.902662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:30:59.829  [2024-12-09 16:41:28.902818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:59.829  [2024-12-09 16:41:28.902886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:30:59.829  [2024-12-09 16:41:28.902947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:30:59.829  [2024-12-09 16:41:28.902982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:30:59.829  [2024-12-09 16:41:28.903065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:30:59.829  [2024-12-09 16:41:28.903105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:30:59.829  [2024-12-09 16:41:28.903140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:30:59.829  [2024-12-09 16:41:28.903357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:30:59.829  [2024-12-09 16:41:28.903376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:30:59.829  [2024-12-09 16:41:28.903387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.903398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:30:59.829  [2024-12-09 16:41:28.903409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:30:59.829  [2024-12-09 16:41:28.903420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.903432] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:30:59.829  [2024-12-09 16:41:28.903445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:30:59.829  [2024-12-09 16:41:28.903457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:30:59.829  [2024-12-09 16:41:28.903468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:59.829  [2024-12-09 16:41:28.903480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:30:59.829  [2024-12-09 16:41:28.903492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:30:59.829  [2024-12-09 16:41:28.903503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:30:59.829  [2024-12-09 16:41:28.903515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:30:59.829  [2024-12-09 16:41:28.903525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:30:59.829  [2024-12-09 16:41:28.903537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:30:59.829  [2024-12-09 16:41:28.903551] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:30:59.829  [2024-12-09 16:41:28.903566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:59.829  [2024-12-09 16:41:28.903586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:30:59.829  [2024-12-09 16:41:28.903599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:30:59.829  [2024-12-09 16:41:28.903612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:30:59.829  [2024-12-09 16:41:28.903624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:30:59.829  [2024-12-09 16:41:28.903636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:30:59.829  [2024-12-09 16:41:28.903649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:30:59.830  [2024-12-09 16:41:28.903661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:30:59.830  [2024-12-09 16:41:28.903673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:30:59.830  [2024-12-09 16:41:28.903685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:30:59.830  [2024-12-09 16:41:28.903698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:30:59.830  [2024-12-09 16:41:28.903710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:30:59.830  [2024-12-09 16:41:28.903722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:30:59.830  [2024-12-09 16:41:28.903734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:30:59.830  [2024-12-09 16:41:28.903747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:30:59.830  [2024-12-09 16:41:28.903759] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:30:59.830  [2024-12-09 16:41:28.903773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:59.830  [2024-12-09 16:41:28.903786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:30:59.830  [2024-12-09 16:41:28.903798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:30:59.830  [2024-12-09 16:41:28.903810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:30:59.830  [2024-12-09 16:41:28.903822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:30:59.830  [2024-12-09 16:41:28.903837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.830  [2024-12-09 16:41:28.903851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:30:59.830  [2024-12-09 16:41:28.903863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.162 ms
00:30:59.830  [2024-12-09 16:41:28.903874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.830  [2024-12-09 16:41:28.941358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.830  [2024-12-09 16:41:28.941538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:59.830  [2024-12-09 16:41:28.941705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.467 ms
00:30:59.830  [2024-12-09 16:41:28.941756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:59.830  [2024-12-09 16:41:28.941859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:59.830  [2024-12-09 16:41:28.941908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:30:59.830  [2024-12-09 16:41:28.941999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:30:59.830  [2024-12-09 16:41:28.942038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.019397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.019546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:31:00.091  [2024-12-09 16:41:29.019646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 77.384 ms
00:31:00.091  [2024-12-09 16:41:29.019688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.019757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.019796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:31:00.091  [2024-12-09 16:41:29.019839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:31:00.091  [2024-12-09 16:41:29.019874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.020568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.020694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:31:00.091  [2024-12-09 16:41:29.020779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.448 ms
00:31:00.091  [2024-12-09 16:41:29.020819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.020992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.021173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:31:00.091  [2024-12-09 16:41:29.021228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.117 ms
00:31:00.091  [2024-12-09 16:41:29.021264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.040358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.040514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:31:00.091  [2024-12-09 16:41:29.040645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.067 ms
00:31:00.091  [2024-12-09 16:41:29.040688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.059910] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:31:00.091  [2024-12-09 16:41:29.060087] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:00.091  [2024-12-09 16:41:29.060248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.060287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:31:00.091  [2024-12-09 16:41:29.060324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.445 ms
00:31:00.091  [2024-12-09 16:41:29.060392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.088662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.090605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:31:00.091  [2024-12-09 16:41:29.090632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.214 ms
00:31:00.091  [2024-12-09 16:41:29.090647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.108307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.108348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:31:00.091  [2024-12-09 16:41:29.108362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.547 ms
00:31:00.091  [2024-12-09 16:41:29.108373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.125416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.125453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:31:00.091  [2024-12-09 16:41:29.125467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.028 ms
00:31:00.091  [2024-12-09 16:41:29.125477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.126297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.126343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:31:00.091  [2024-12-09 16:41:29.126363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.703 ms
00:31:00.091  [2024-12-09 16:41:29.126375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.208855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.208926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:31:00.091  [2024-12-09 16:41:29.208952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 82.586 ms
00:31:00.091  [2024-12-09 16:41:29.208964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.219207] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:31:00.091  [2024-12-09 16:41:29.221435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.221466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:31:00.091  [2024-12-09 16:41:29.221480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.442 ms
00:31:00.091  [2024-12-09 16:41:29.221509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.091  [2024-12-09 16:41:29.221589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.091  [2024-12-09 16:41:29.221604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:31:00.091  [2024-12-09 16:41:29.221622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:31:00.092  [2024-12-09 16:41:29.221633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.092  [2024-12-09 16:41:29.222563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.092  [2024-12-09 16:41:29.222594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:31:00.092  [2024-12-09 16:41:29.222608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.883 ms
00:31:00.092  [2024-12-09 16:41:29.222619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.092  [2024-12-09 16:41:29.222651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.092  [2024-12-09 16:41:29.222665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:31:00.092  [2024-12-09 16:41:29.222677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:31:00.092  [2024-12-09 16:41:29.222689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.092  [2024-12-09 16:41:29.222733] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:00.092  [2024-12-09 16:41:29.222748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.092  [2024-12-09 16:41:29.222761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:31:00.092  [2024-12-09 16:41:29.222772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:31:00.092  [2024-12-09 16:41:29.222784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.092  [2024-12-09 16:41:29.257263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.092  [2024-12-09 16:41:29.257303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:31:00.092  [2024-12-09 16:41:29.257325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.512 ms
00:31:00.092  [2024-12-09 16:41:29.257336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.092  [2024-12-09 16:41:29.257408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:00.092  [2024-12-09 16:41:29.257422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:31:00.092  [2024-12-09 16:41:29.257434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.031 ms
00:31:00.092  [2024-12-09 16:41:29.257446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:00.092  [2024-12-09 16:41:29.258730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.191 ms, result 0
00:31:01.472  
[2024-12-09T16:41:31.596Z] Copying: 24/1024 [MB] (24 MBps)
[2024-12-09T16:41:32.534Z] Copying: 48/1024 [MB] (23 MBps)
[2024-12-09T16:41:33.473Z] Copying: 72/1024 [MB] (23 MBps)
[2024-12-09T16:41:34.853Z] Copying: 95/1024 [MB] (22 MBps)
[2024-12-09T16:41:35.792Z] Copying: 119/1024 [MB] (24 MBps)
[2024-12-09T16:41:36.733Z] Copying: 143/1024 [MB] (24 MBps)
[2024-12-09T16:41:37.673Z] Copying: 167/1024 [MB] (24 MBps)
[2024-12-09T16:41:38.613Z] Copying: 191/1024 [MB] (24 MBps)
[2024-12-09T16:41:39.552Z] Copying: 216/1024 [MB] (24 MBps)
[2024-12-09T16:41:40.490Z] Copying: 239/1024 [MB] (23 MBps)
[2024-12-09T16:41:41.869Z] Copying: 263/1024 [MB] (24 MBps)
[2024-12-09T16:41:42.807Z] Copying: 288/1024 [MB] (24 MBps)
[2024-12-09T16:41:43.745Z] Copying: 312/1024 [MB] (24 MBps)
[2024-12-09T16:41:44.683Z] Copying: 336/1024 [MB] (23 MBps)
[2024-12-09T16:41:45.624Z] Copying: 360/1024 [MB] (24 MBps)
[2024-12-09T16:41:46.562Z] Copying: 384/1024 [MB] (24 MBps)
[2024-12-09T16:41:47.501Z] Copying: 408/1024 [MB] (24 MBps)
[2024-12-09T16:41:48.440Z] Copying: 432/1024 [MB] (24 MBps)
[2024-12-09T16:41:49.820Z] Copying: 456/1024 [MB] (24 MBps)
[2024-12-09T16:41:50.759Z] Copying: 480/1024 [MB] (24 MBps)
[2024-12-09T16:41:51.697Z] Copying: 504/1024 [MB] (23 MBps)
[2024-12-09T16:41:52.633Z] Copying: 528/1024 [MB] (23 MBps)
[2024-12-09T16:41:53.572Z] Copying: 551/1024 [MB] (23 MBps)
[2024-12-09T16:41:54.510Z] Copying: 575/1024 [MB] (24 MBps)
[2024-12-09T16:41:55.451Z] Copying: 600/1024 [MB] (24 MBps)
[2024-12-09T16:41:56.448Z] Copying: 626/1024 [MB] (25 MBps)
[2024-12-09T16:41:57.829Z] Copying: 651/1024 [MB] (25 MBps)
[2024-12-09T16:41:58.769Z] Copying: 676/1024 [MB] (25 MBps)
[2024-12-09T16:41:59.709Z] Copying: 703/1024 [MB] (26 MBps)
[2024-12-09T16:42:00.648Z] Copying: 729/1024 [MB] (26 MBps)
[2024-12-09T16:42:01.586Z] Copying: 753/1024 [MB] (24 MBps)
[2024-12-09T16:42:02.525Z] Copying: 779/1024 [MB] (25 MBps)
[2024-12-09T16:42:03.464Z] Copying: 804/1024 [MB] (25 MBps)
[2024-12-09T16:42:04.845Z] Copying: 829/1024 [MB] (25 MBps)
[2024-12-09T16:42:05.413Z] Copying: 854/1024 [MB] (25 MBps)
[2024-12-09T16:42:06.794Z] Copying: 880/1024 [MB] (25 MBps)
[2024-12-09T16:42:07.733Z] Copying: 903/1024 [MB] (23 MBps)
[2024-12-09T16:42:08.671Z] Copying: 929/1024 [MB] (25 MBps)
[2024-12-09T16:42:09.610Z] Copying: 955/1024 [MB] (25 MBps)
[2024-12-09T16:42:10.549Z] Copying: 979/1024 [MB] (24 MBps)
[2024-12-09T16:42:11.489Z] Copying: 1005/1024 [MB] (25 MBps)
[2024-12-09T16:42:11.489Z] Copying: 1024/1024 [MB] (average 24 MBps)
00:31:42.310  [2024-12-09 16:42:11.232944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.233068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:31:42.310  [2024-12-09 16:42:11.233118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:31:42.310  [2024-12-09 16:42:11.233172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.233240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:42.310  [2024-12-09 16:42:11.244844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.245073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:31:42.310  [2024-12-09 16:42:11.245219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.570 ms
00:31:42.310  [2024-12-09 16:42:11.245282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.245677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.245754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:31:42.310  [2024-12-09 16:42:11.245811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.312 ms
00:31:42.310  [2024-12-09 16:42:11.245964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.250681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.250845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:31:42.310  [2024-12-09 16:42:11.250987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.644 ms
00:31:42.310  [2024-12-09 16:42:11.251024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.258217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.258357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:31:42.310  [2024-12-09 16:42:11.258464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.166 ms
00:31:42.310  [2024-12-09 16:42:11.258506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.293957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.294112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:31:42.310  [2024-12-09 16:42:11.294230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.414 ms
00:31:42.310  [2024-12-09 16:42:11.294268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.314767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.314915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:31:42.310  [2024-12-09 16:42:11.314989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.473 ms
00:31:42.310  [2024-12-09 16:42:11.315025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.316963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.317081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:31:42.310  [2024-12-09 16:42:11.317156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.875 ms
00:31:42.310  [2024-12-09 16:42:11.317191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.353075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.353203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:31:42.310  [2024-12-09 16:42:11.353338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.899 ms
00:31:42.310  [2024-12-09 16:42:11.353375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.387659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.387777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:31:42.310  [2024-12-09 16:42:11.387859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.236 ms
00:31:42.310  [2024-12-09 16:42:11.387893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.420724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.420851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:31:42.310  [2024-12-09 16:42:11.420948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.817 ms
00:31:42.310  [2024-12-09 16:42:11.420984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.453671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.310  [2024-12-09 16:42:11.453831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:31:42.310  [2024-12-09 16:42:11.453925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.645 ms
00:31:42.310  [2024-12-09 16:42:11.453962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.310  [2024-12-09 16:42:11.454016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:42.310  [2024-12-09 16:42:11.454064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:42.310  [2024-12-09 16:42:11.454118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:     1536 / 261120 	wr_cnt: 1	state: open
00:31:42.310  [2024-12-09 16:42:11.454213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.310  [2024-12-09 16:42:11.454700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.454747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.454835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.454884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.454945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.455995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.311  [2024-12-09 16:42:11.456160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:31:42.312  [2024-12-09 16:42:11.456476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:31:42.312  [2024-12-09 16:42:11.456486] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         b49a9023-73d5-44b8-8ac1-3392f825704d
00:31:42.312  [2024-12-09 16:42:11.456497] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    262656
00:31:42.312  [2024-12-09 16:42:11.456506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:31:42.312  [2024-12-09 16:42:11.456516] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:31:42.312  [2024-12-09 16:42:11.456526] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:31:42.312  [2024-12-09 16:42:11.456546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:42.312  [2024-12-09 16:42:11.456556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:31:42.312  [2024-12-09 16:42:11.456566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:31:42.312  [2024-12-09 16:42:11.456575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:31:42.312  [2024-12-09 16:42:11.456584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:31:42.312  [2024-12-09 16:42:11.456595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.312  [2024-12-09 16:42:11.456605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:31:42.312  [2024-12-09 16:42:11.456616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.583 ms
00:31:42.312  [2024-12-09 16:42:11.456630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.312  [2024-12-09 16:42:11.475494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.312  [2024-12-09 16:42:11.475525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:31:42.312  [2024-12-09 16:42:11.475536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.855 ms
00:31:42.312  [2024-12-09 16:42:11.475546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.312  [2024-12-09 16:42:11.476055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:42.312  [2024-12-09 16:42:11.476093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:31:42.312  [2024-12-09 16:42:11.476104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.489 ms
00:31:42.312  [2024-12-09 16:42:11.476113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.524155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.524197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:31:42.572  [2024-12-09 16:42:11.524209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.524219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.524265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.524280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:31:42.572  [2024-12-09 16:42:11.524290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.524300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.524357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.524369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:31:42.572  [2024-12-09 16:42:11.524378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.524388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.524403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.524413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:31:42.572  [2024-12-09 16:42:11.524426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.524435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.639825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.640164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:31:42.572  [2024-12-09 16:42:11.640187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.640198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.734341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.734386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:31:42.572  [2024-12-09 16:42:11.734399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.734409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.734483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.734495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:31:42.572  [2024-12-09 16:42:11.734504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.734514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.734548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.734559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:31:42.572  [2024-12-09 16:42:11.734569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.734582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.734685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.572  [2024-12-09 16:42:11.734697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:31:42.572  [2024-12-09 16:42:11.734707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.572  [2024-12-09 16:42:11.734717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.572  [2024-12-09 16:42:11.734750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.573  [2024-12-09 16:42:11.734761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:31:42.573  [2024-12-09 16:42:11.734771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.573  [2024-12-09 16:42:11.734780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.573  [2024-12-09 16:42:11.734819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.573  [2024-12-09 16:42:11.734830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:31:42.573  [2024-12-09 16:42:11.734839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.573  [2024-12-09 16:42:11.734849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.573  [2024-12-09 16:42:11.734890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:42.573  [2024-12-09 16:42:11.734926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:31:42.573  [2024-12-09 16:42:11.734953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:31:42.573  [2024-12-09 16:42:11.734967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:31:42.573  [2024-12-09 16:42:11.735107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 502.998 ms, result 0
00:31:43.951  
00:31:43.951  
00:31:43.951   16:42:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:31:45.330  /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:31:45.330   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:31:45.330   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:31:45.330   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:31:45.330   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:31:45.590  Process with pid 82345 is not found
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82345
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82345 ']'
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82345
00:31:45.590  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82345) - No such process
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82345 is not found'
00:31:45.590   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:31:45.849   16:42:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:31:45.849   16:42:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:45.849  Remove shared memory files
00:31:45.849   16:42:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:31:45.849   16:42:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:31:45.849   16:42:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:31:45.849   16:42:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:45.849   16:42:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:31:45.849  ************************************
00:31:45.849  END TEST ftl_dirty_shutdown
00:31:45.849  ************************************
00:31:45.849  
00:31:45.849  real	3m46.355s
00:31:45.849  user	4m14.140s
00:31:45.849  sys	0m40.020s
00:31:45.849   16:42:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:45.849   16:42:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:46.108   16:42:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:31:46.108   16:42:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:46.108   16:42:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:46.108   16:42:15 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:46.108  ************************************
00:31:46.108  START TEST ftl_upgrade_shutdown
00:31:46.108  ************************************
00:31:46.108   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:31:46.108  * Looking for test storage...
00:31:46.108  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:31:46.108    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:46.108     16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:31:46.108     16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
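The trace above (scripts/common.sh@333-368) is cmp_versions deciding whether the installed lcov (1.15, pulled out with awk '{print $NF}') predates 2.x, which selects the pre-2.0 spelling of the coverage flags exported just below. A minimal sketch of that comparison, covering only the '<' case exercised here; the decimal digit-check traced at @353-355 is elided into the ${...:-0} defaults:

    cmp_versions() { # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-:
        local ver1 ver2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1 # ver1 newer: not '<'
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0 # 1 < 2: pre-2.0 lcov
        done
        return 1 # equal
    }

Here the major versions already differ (1 < 2), so the function returns success on the first pass, matching the return 0 traced at @368.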
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:46.368  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:46.368  		--rc genhtml_branch_coverage=1
00:31:46.368  		--rc genhtml_function_coverage=1
00:31:46.368  		--rc genhtml_legend=1
00:31:46.368  		--rc geninfo_all_blocks=1
00:31:46.368  		--rc geninfo_unexecuted_blocks=1
00:31:46.368  		
00:31:46.368  		'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:46.368  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:46.368  		--rc genhtml_branch_coverage=1
00:31:46.368  		--rc genhtml_function_coverage=1
00:31:46.368  		--rc genhtml_legend=1
00:31:46.368  		--rc geninfo_all_blocks=1
00:31:46.368  		--rc geninfo_unexecuted_blocks=1
00:31:46.368  		
00:31:46.368  		'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:46.368  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:46.368  		--rc genhtml_branch_coverage=1
00:31:46.368  		--rc genhtml_function_coverage=1
00:31:46.368  		--rc genhtml_legend=1
00:31:46.368  		--rc geninfo_all_blocks=1
00:31:46.368  		--rc geninfo_unexecuted_blocks=1
00:31:46.368  		
00:31:46.368  		'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:46.368  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:46.368  		--rc genhtml_branch_coverage=1
00:31:46.368  		--rc genhtml_function_coverage=1
00:31:46.368  		--rc genhtml_legend=1
00:31:46.368  		--rc geninfo_all_blocks=1
00:31:46.368  		--rc geninfo_unexecuted_blocks=1
00:31:46.368  		
00:31:46.368  		'
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:31:46.368      16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:31:46.368     16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:31:46.368    16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480
00:31:46.368   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84759
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84759
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84759 ']'
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:46.369  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:46.369   16:42:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:46.369  [2024-12-09 16:42:15.478098] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:31:46.369  [2024-12-09 16:42:15.478405] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84759 ]
00:31:46.628  [2024-12-09 16:42:15.661039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:46.628  [2024-12-09 16:42:15.765775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
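waitforlisten (autotest_common.sh@835-868 above) blocks until the freshly forked spdk_tgt (pid 84759) answers on /var/tmp/spdk.sock, giving up after max_retries=100 polls. The retry body is not traced here, so this is only a rough sketch and the RPC probe is an assumption:

    waitforlisten() { # rough paraphrase; the probe command is assumed
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        [[ -z $pid ]] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            kill -0 "$pid" 2> /dev/null || return 1 # target died while starting
            sleep 0.5
        done
        ((i == 0)) && return 1 # retries exhausted
        return 0
    }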
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]]
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]]
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]]
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]]
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]]
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:31:47.566   16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]]
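Before building anything, tcp_target_setup verifies that every knob exported by upgrade_shutdown.sh@19-24 is non-empty; the [[ -z ... ]] lines above show each value (ftl, 0000:00:11.0, 20480, ...) substituted in turn. A compact sketch of that guard; die is SPDK's usual fatal-error helper, assumed here to be the failure branch since the trace only shows the passing checks:

    params=(FTL_BDEV FTL_BASE FTL_BASE_SIZE FTL_CACHE FTL_CACHE_SIZE FTL_L2P_DRAM_LIMIT)
    for param in "${params[@]}"; do
        [[ -z ${!param} ]] && die "$param is not set" # ${!param}: indirect expansion
    done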
00:31:47.566    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480
00:31:47.566    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base
00:31:47.566    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:31:47.566    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480
00:31:47.566    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:31:47.566     16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
00:31:47.825    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1
00:31:47.825    16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size
00:31:47.825     16:42:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1
00:31:47.825     16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1
00:31:47.825     16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:31:47.825     16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:31:47.825     16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:31:47.825      16:42:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:31:48.085    {
00:31:48.085      "name": "basen1",
00:31:48.085      "aliases": [
00:31:48.085        "9f743b49-65aa-4409-a7f9-be8b91c7a9ad"
00:31:48.085      ],
00:31:48.085      "product_name": "NVMe disk",
00:31:48.085      "block_size": 4096,
00:31:48.085      "num_blocks": 1310720,
00:31:48.085      "uuid": "9f743b49-65aa-4409-a7f9-be8b91c7a9ad",
00:31:48.085      "numa_id": -1,
00:31:48.085      "assigned_rate_limits": {
00:31:48.085        "rw_ios_per_sec": 0,
00:31:48.085        "rw_mbytes_per_sec": 0,
00:31:48.085        "r_mbytes_per_sec": 0,
00:31:48.085        "w_mbytes_per_sec": 0
00:31:48.085      },
00:31:48.085      "claimed": true,
00:31:48.085      "claim_type": "read_many_write_one",
00:31:48.085      "zoned": false,
00:31:48.085      "supported_io_types": {
00:31:48.085        "read": true,
00:31:48.085        "write": true,
00:31:48.085        "unmap": true,
00:31:48.085        "flush": true,
00:31:48.085        "reset": true,
00:31:48.085        "nvme_admin": true,
00:31:48.085        "nvme_io": true,
00:31:48.085        "nvme_io_md": false,
00:31:48.085        "write_zeroes": true,
00:31:48.085        "zcopy": false,
00:31:48.085        "get_zone_info": false,
00:31:48.085        "zone_management": false,
00:31:48.085        "zone_append": false,
00:31:48.085        "compare": true,
00:31:48.085        "compare_and_write": false,
00:31:48.085        "abort": true,
00:31:48.085        "seek_hole": false,
00:31:48.085        "seek_data": false,
00:31:48.085        "copy": true,
00:31:48.085        "nvme_iov_md": false
00:31:48.085      },
00:31:48.085      "driver_specific": {
00:31:48.085        "nvme": [
00:31:48.085          {
00:31:48.085            "pci_address": "0000:00:11.0",
00:31:48.085            "trid": {
00:31:48.085              "trtype": "PCIe",
00:31:48.085              "traddr": "0000:00:11.0"
00:31:48.085            },
00:31:48.085            "ctrlr_data": {
00:31:48.085              "cntlid": 0,
00:31:48.085              "vendor_id": "0x1b36",
00:31:48.085              "model_number": "QEMU NVMe Ctrl",
00:31:48.085              "serial_number": "12341",
00:31:48.085              "firmware_revision": "8.0.0",
00:31:48.085              "subnqn": "nqn.2019-08.org.qemu:12341",
00:31:48.085              "oacs": {
00:31:48.085                "security": 0,
00:31:48.085                "format": 1,
00:31:48.085                "firmware": 0,
00:31:48.085                "ns_manage": 1
00:31:48.085              },
00:31:48.085              "multi_ctrlr": false,
00:31:48.085              "ana_reporting": false
00:31:48.085            },
00:31:48.085            "vs": {
00:31:48.085              "nvme_version": "1.4"
00:31:48.085            },
00:31:48.085            "ns_data": {
00:31:48.085              "id": 1,
00:31:48.085              "can_share": false
00:31:48.085            }
00:31:48.085          }
00:31:48.085        ],
00:31:48.085        "mp_policy": "active_passive"
00:31:48.085      }
00:31:48.085    }
00:31:48.085  ]'
00:31:48.085      16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:31:48.085      16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:31:48.085    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:31:48.085    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
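get_bdev_size (autotest_common.sh@1382-1392 above) reduces the JSON dump to a size in MiB: block_size 4096 × num_blocks 1310720 = 5,368,709,120 bytes = 5120 MiB. A minimal paraphrase, assuming rpc.py and jq on PATH:

    get_bdev_size() { # prints the bdev size in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info") # 4096
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info") # 1310720
        echo $((bs * nb / 1024 / 1024))             # 5120
    }

The requested base size (20480 MiB) is larger than the 5120 MiB namespace, so the [[ 20480 -le 5120 ]] test above fails and the script falls through to the thin-provisioned lvol path that follows.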
00:31:48.085    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:48.085     16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:48.345    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=872e902a-e743-4837-930d-d0acc78f5762
00:31:48.345    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:31:48.345    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 872e902a-e743-4837-930d-d0acc78f5762
00:31:48.604     16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:31:48.863    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=96bf0b95-3767-4236-a090-a4db8771635f
00:31:48.863    16:42:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 96bf0b95-3767-4236-a090-a4db8771635f
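clear_lvols first deletes the lvstore left over from the previous test (872e902a-...), then a fresh store named lvs is created on basen1 and a 20480 MiB volume is carved from it with -t (thin provisioning). That is how a 20480 MiB FTL base bdev can sit on a 5120 MiB namespace: clusters are allocated on first write rather than up front. The sequence, as traced:

    scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs          # prints the lvstore UUID
    scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u "$lvs" # thin lvol, prints its UUID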
00:31:49.123   16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=730c1604-070d-4441-be38-25c082dc12a2
00:31:49.123   16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 730c1604-070d-4441-be38-25c082dc12a2 ]]
00:31:49.123    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 730c1604-070d-4441-be38-25c082dc12a2 5120
00:31:49.123    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
00:31:49.123    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:31:49.123    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=730c1604-070d-4441-be38-25c082dc12a2
00:31:49.123    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 730c1604-070d-4441-be38-25c082dc12a2
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=730c1604-070d-4441-be38-25c082dc12a2
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:31:49.123      16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 730c1604-070d-4441-be38-25c082dc12a2
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:31:49.123    {
00:31:49.123      "name": "730c1604-070d-4441-be38-25c082dc12a2",
00:31:49.123      "aliases": [
00:31:49.123        "lvs/basen1p0"
00:31:49.123      ],
00:31:49.123      "product_name": "Logical Volume",
00:31:49.123      "block_size": 4096,
00:31:49.123      "num_blocks": 5242880,
00:31:49.123      "uuid": "730c1604-070d-4441-be38-25c082dc12a2",
00:31:49.123      "assigned_rate_limits": {
00:31:49.123        "rw_ios_per_sec": 0,
00:31:49.123        "rw_mbytes_per_sec": 0,
00:31:49.123        "r_mbytes_per_sec": 0,
00:31:49.123        "w_mbytes_per_sec": 0
00:31:49.123      },
00:31:49.123      "claimed": false,
00:31:49.123      "zoned": false,
00:31:49.123      "supported_io_types": {
00:31:49.123        "read": true,
00:31:49.123        "write": true,
00:31:49.123        "unmap": true,
00:31:49.123        "flush": false,
00:31:49.123        "reset": true,
00:31:49.123        "nvme_admin": false,
00:31:49.123        "nvme_io": false,
00:31:49.123        "nvme_io_md": false,
00:31:49.123        "write_zeroes": true,
00:31:49.123        "zcopy": false,
00:31:49.123        "get_zone_info": false,
00:31:49.123        "zone_management": false,
00:31:49.123        "zone_append": false,
00:31:49.123        "compare": false,
00:31:49.123        "compare_and_write": false,
00:31:49.123        "abort": false,
00:31:49.123        "seek_hole": true,
00:31:49.123        "seek_data": true,
00:31:49.123        "copy": false,
00:31:49.123        "nvme_iov_md": false
00:31:49.123      },
00:31:49.123      "driver_specific": {
00:31:49.123        "lvol": {
00:31:49.123          "lvol_store_uuid": "96bf0b95-3767-4236-a090-a4db8771635f",
00:31:49.123          "base_bdev": "basen1",
00:31:49.123          "thin_provision": true,
00:31:49.123          "num_allocated_clusters": 0,
00:31:49.123          "snapshot": false,
00:31:49.123          "clone": false,
00:31:49.123          "esnap_clone": false
00:31:49.123        }
00:31:49.123      }
00:31:49.123    }
00:31:49.123  ]'
00:31:49.123      16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:31:49.123     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:31:49.123      16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:31:49.382     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880
00:31:49.382     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480
00:31:49.382     16:42:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480
00:31:49.383    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
00:31:49.383    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:31:49.383     16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
00:31:49.642    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:31:49.642    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:31:49.642    16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:31:49.642   16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:31:49.642   16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
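create_nv_cache_bdev attaches the second controller (cachen1) and splits off one 5120 MiB partition, cachen1p0, to serve as the write-buffer cache. The FTL bdev is then assembled from the thin lvol (base) and that split (cache), with the L2P table's DRAM budget capped at 2 MiB per FTL_L2P_DRAM_LIMIT. The two calls, as traced above and on the next line ($base_uuid standing in for the lvol UUID 730c1604-...):

    scripts/rpc.py bdev_split_create cachen1 -s 5120 1 # -> cachen1p0
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d "$base_uuid" -c cachen1p0 --l2p_dram_limit 2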
00:31:49.642   16:42:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 730c1604-070d-4441-be38-25c082dc12a2 -c cachen1p0 --l2p_dram_limit 2
00:31:49.917  [2024-12-09 16:42:18.939526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.939574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:31:49.917  [2024-12-09 16:42:18.939592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:49.917  [2024-12-09 16:42:18.939603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.939666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.939678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:49.917  [2024-12-09 16:42:18.939698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.043 ms
00:31:49.917  [2024-12-09 16:42:18.939708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.939730] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:31:49.917  [2024-12-09 16:42:18.940807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:31:49.917  [2024-12-09 16:42:18.940842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.940854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:49.917  [2024-12-09 16:42:18.940867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.113 ms
00:31:49.917  [2024-12-09 16:42:18.940877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.940965] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8acfe7ce-843d-4f42-a2af-cd7dd442e3e7
00:31:49.917  [2024-12-09 16:42:18.942547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.942685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Default-initialize superblock
00:31:49.917  [2024-12-09 16:42:18.942771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.017 ms
00:31:49.917  [2024-12-09 16:42:18.942811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.950298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.950463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:49.917  [2024-12-09 16:42:18.950585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.433 ms
00:31:49.917  [2024-12-09 16:42:18.950626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.950695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.950731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:49.917  [2024-12-09 16:42:18.950762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.024 ms
00:31:49.917  [2024-12-09 16:42:18.950853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.950959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.951001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:31:49.917  [2024-12-09 16:42:18.951035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.010 ms
00:31:49.917  [2024-12-09 16:42:18.951067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.951235] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:31:49.917  [2024-12-09 16:42:18.955676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.955827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:49.917  [2024-12-09 16:42:18.955962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.452 ms
00:31:49.917  [2024-12-09 16:42:18.955979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.956017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.956028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:31:49.917  [2024-12-09 16:42:18.956041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:49.917  [2024-12-09 16:42:18.956051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.956103] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1
00:31:49.917  [2024-12-09 16:42:18.956232] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:31:49.917  [2024-12-09 16:42:18.956253] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:31:49.917  [2024-12-09 16:42:18.956266] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:31:49.917  [2024-12-09 16:42:18.956282] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956294] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956308] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:31:49.917  [2024-12-09 16:42:18.956318] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:31:49.917  [2024-12-09 16:42:18.956335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:31:49.917  [2024-12-09 16:42:18.956345] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:31:49.917  [2024-12-09 16:42:18.956358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.956368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:31:49.917  [2024-12-09 16:42:18.956381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.257 ms
00:31:49.917  [2024-12-09 16:42:18.956391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.956465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.917  [2024-12-09 16:42:18.956485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:31:49.917  [2024-12-09 16:42:18.956498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.054 ms
00:31:49.917  [2024-12-09 16:42:18.956507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.917  [2024-12-09 16:42:18.956601] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:31:49.917  [2024-12-09 16:42:18.956613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:31:49.917  [2024-12-09 16:42:18.956626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:31:49.917  [2024-12-09 16:42:18.956658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:31:49.917  [2024-12-09 16:42:18.956679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:31:49.917  [2024-12-09 16:42:18.956690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:31:49.917  [2024-12-09 16:42:18.956700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:31:49.917  [2024-12-09 16:42:18.956722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:31:49.917  [2024-12-09 16:42:18.956733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:31:49.917  [2024-12-09 16:42:18.956755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:31:49.917  [2024-12-09 16:42:18.956764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:31:49.917  [2024-12-09 16:42:18.956788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:31:49.917  [2024-12-09 16:42:18.956799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:31:49.917  [2024-12-09 16:42:18.956819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:31:49.917  [2024-12-09 16:42:18.956828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:31:49.917  [2024-12-09 16:42:18.956848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:31:49.917  [2024-12-09 16:42:18.956859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:31:49.917  [2024-12-09 16:42:18.956879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:31:49.917  [2024-12-09 16:42:18.956888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:31:49.917  [2024-12-09 16:42:18.956924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:31:49.917  [2024-12-09 16:42:18.956935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:49.917  [2024-12-09 16:42:18.956944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:31:49.917  [2024-12-09 16:42:18.956958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:31:49.917  [2024-12-09 16:42:18.956967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.956978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:31:49.917  [2024-12-09 16:42:18.956987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:31:49.917  [2024-12-09 16:42:18.957000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.957009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:31:49.917  [2024-12-09 16:42:18.957020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:31:49.917  [2024-12-09 16:42:18.957029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.957041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:31:49.917  [2024-12-09 16:42:18.957049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:31:49.917  [2024-12-09 16:42:18.957061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.957069] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:31:49.917  [2024-12-09 16:42:18.957081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:31:49.917  [2024-12-09 16:42:18.957091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:49.917  [2024-12-09 16:42:18.957103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:49.917  [2024-12-09 16:42:18.957113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:31:49.917  [2024-12-09 16:42:18.957127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:31:49.917  [2024-12-09 16:42:18.957148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:31:49.917  [2024-12-09 16:42:18.957160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:31:49.917  [2024-12-09 16:42:18.957169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:31:49.917  [2024-12-09 16:42:18.957181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:31:49.918  [2024-12-09 16:42:18.957192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:31:49.918  [2024-12-09 16:42:18.957209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:31:49.918  [2024-12-09 16:42:18.957233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:31:49.918  [2024-12-09 16:42:18.957266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:31:49.918  [2024-12-09 16:42:18.957278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:31:49.918  [2024-12-09 16:42:18.957288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:31:49.918  [2024-12-09 16:42:18.957302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:31:49.918  [2024-12-09 16:42:18.957384] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:31:49.918  [2024-12-09 16:42:18.957397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:49.918  [2024-12-09 16:42:18.957430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:31:49.918  [2024-12-09 16:42:18.957440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:31:49.918  [2024-12-09 16:42:18.957451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:31:49.918  [2024-12-09 16:42:18.957462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.918  [2024-12-09 16:42:18.957474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:31:49.918  [2024-12-09 16:42:18.957484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.919 ms
00:31:49.918  [2024-12-09 16:42:18.957496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
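The layout numbers above hang together: data_btm reserves 18432 MiB of the base device, i.e. 18432 × 256 = 4,718,592 4-KiB blocks, and the L2P is provisioned for 80% of them, 4,718,592 × 0.8 = 3,774,873 entries (the held-back 20% reads like overprovisioning; that ratio is inferred from the numbers, not from the source). At an L2P address size of 4 bytes that is roughly 14.4 MiB, which is why the l2p region in the NV cache layout is sized 14.50 MiB. A quick check:

    echo $((18432 * 256))           # 4718592 data blocks
    echo $((4718592 * 8 / 10))      # 3774873 L2P entries
    echo $((3774873 * 4 / 1048576)) # 14 (MiB, rounded down) -> the 14.50 MiB region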
00:31:49.918  [2024-12-09 16:42:18.957534] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
00:31:49.918  [2024-12-09 16:42:18.957551] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:31:54.111  [2024-12-09 16:42:22.483182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.111  [2024-12-09 16:42:22.483437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Scrub NV cache
00:31:54.111  [2024-12-09 16:42:22.483529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3531.369 ms
00:31:54.111  [2024-12-09 16:42:22.483571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.111  [2024-12-09 16:42:22.522018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.111  [2024-12-09 16:42:22.522236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:54.111  [2024-12-09 16:42:22.522375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 38.111 ms
00:31:54.111  [2024-12-09 16:42:22.522417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.111  [2024-12-09 16:42:22.522523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.111  [2024-12-09 16:42:22.522562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:31:54.112  [2024-12-09 16:42:22.522652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.013 ms
00:31:54.112  [2024-12-09 16:42:22.522699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.567646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.567826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:54.112  [2024-12-09 16:42:22.567981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 44.944 ms
00:31:54.112  [2024-12-09 16:42:22.568026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.568083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.568126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:54.112  [2024-12-09 16:42:22.568158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:54.112  [2024-12-09 16:42:22.568257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.568781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.568830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:54.112  [2024-12-09 16:42:22.568942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.423 ms
00:31:54.112  [2024-12-09 16:42:22.568983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.569290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.569335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:54.112  [2024-12-09 16:42:22.569368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.020 ms
00:31:54.112  [2024-12-09 16:42:22.569403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.587141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.587307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:54.112  [2024-12-09 16:42:22.587461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.728 ms
00:31:54.112  [2024-12-09 16:42:22.587501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.624586] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:31:54.112  [2024-12-09 16:42:22.626002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.626044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:31:54.112  [2024-12-09 16:42:22.626070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 38.476 ms
00:31:54.112  [2024-12-09 16:42:22.626087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.659257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.659299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear L2P
00:31:54.112  [2024-12-09 16:42:22.659315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.175 ms
00:31:54.112  [2024-12-09 16:42:22.659326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.659424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.659441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:31:54.112  [2024-12-09 16:42:22.659457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.054 ms
00:31:54.112  [2024-12-09 16:42:22.659467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.693682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.693721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Save initial band info metadata
00:31:54.112  [2024-12-09 16:42:22.693738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 34.215 ms
00:31:54.112  [2024-12-09 16:42:22.693748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.727497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.727535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Save initial chunk info metadata
00:31:54.112  [2024-12-09 16:42:22.727550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.753 ms
00:31:54.112  [2024-12-09 16:42:22.727575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.728243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.728266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:31:54.112  [2024-12-09 16:42:22.728281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.631 ms
00:31:54.112  [2024-12-09 16:42:22.728294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.826262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.826303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Wipe P2L region
00:31:54.112  [2024-12-09 16:42:22.826339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 98.069 ms
00:31:54.112  [2024-12-09 16:42:22.826350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.861749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.861795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear trim map
00:31:54.112  [2024-12-09 16:42:22.861812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 35.370 ms
00:31:54.112  [2024-12-09 16:42:22.861823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.895433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.895467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear trim log
00:31:54.112  [2024-12-09 16:42:22.895483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.620 ms
00:31:54.112  [2024-12-09 16:42:22.895509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.928853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.929041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL dirty state
00:31:54.112  [2024-12-09 16:42:22.929068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.354 ms
00:31:54.112  [2024-12-09 16:42:22.929079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.929163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.929175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:31:54.112  [2024-12-09 16:42:22.929192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:31:54.112  [2024-12-09 16:42:22.929202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.929298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:54.112  [2024-12-09 16:42:22.929312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:31:54.112  [2024-12-09 16:42:22.929325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.031 ms
00:31:54.112  [2024-12-09 16:42:22.929334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:54.112  [2024-12-09 16:42:22.930325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3996.846 ms, result 0
00:31:54.112  {
00:31:54.112    "name": "ftl",
00:31:54.112    "uuid": "8acfe7ce-843d-4f42-a2af-cd7dd442e3e7"
00:31:54.112  }
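bdev_ftl_create answers with the new device's identity; the uuid is the superblock UUID minted during "Default-initialize superblock" above, and it is what lets a later startup find this same FTL instance after the shutdown under test. If a script needed it, the field could be pulled out of the response (hypothetical usage; the test does not actually do this here):

    # hypothetical: capture the instance UUID from the create response
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d "$base_uuid" -c cachen1p0 --l2p_dram_limit 2 | jq -r .uuid
    # -> 8acfe7ce-843d-4f42-a2af-cd7dd442e3e7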
00:31:54.112   16:42:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
00:31:54.112  [2024-12-09 16:42:23.129342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:54.112   16:42:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
00:31:54.371   16:42:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
00:31:54.372  [2024-12-09 16:42:23.501367] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:54.372   16:42:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
00:31:54.631  [2024-12-09 16:42:23.702227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:54.631   16:42:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
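With the FTL bdev up, common.sh@121-126 exports it over NVMe-oF/TCP so a second process can drive I/O against it: create the TCP transport, create a subsystem that allows any host (-a) with at most one namespace (-m 1), add ftl as that namespace, listen on 127.0.0.1:4420, and persist the target state. As traced (the redirection of save_config into tgt.json is assumed; xtrace does not show it):

    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    scripts/rpc.py save_config > test/ftl/config/tgt.json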
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024
00:31:54.917  Fill FTL, iteration 1
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=()
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 ))
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1'
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]]
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84887
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84887 /var/tmp/spdk.tgt.sock
00:31:54.917  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84887 ']'
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...'
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:54.917   16:42:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:55.216  [2024-12-09 16:42:24.174746] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:31:55.216  [2024-12-09 16:42:24.175093] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84887 ]
00:31:55.216  [2024-12-09 16:42:24.360285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:55.476  [2024-12-09 16:42:24.467411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:56.414   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:56.414   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:56.415   16:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
00:31:56.415  ftln1
00:31:56.415   16:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": ['
00:31:56.415   16:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}'
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84887
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84887 ']'
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84887
00:31:56.674    16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:56.674    16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84887
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84887'
00:31:56.674  killing process with pid 84887
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84887
00:31:56.674   16:42:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84887
00:31:59.212   16:42:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid
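Note: tcp_dd (ftl/common.sh@198-@199) is a thin wrapper: it makes sure an initiator bdev config exists, then runs spdk_dd against it. A condensed reconstruction of the traced flow (@151-@177); $rootdir and the redirection into ini.json are inferred, the trace only shows the echo/save_subsystem_config sequence and the later check that the file exists:

    tcp_initiator_setup() {
        local rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
        [[ -f $rootdir/test/ftl/config/ini.json ]] && return 0   # config already generated
        # throwaway initiator app, used only to produce the bdev config
        "$rootdir/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
        spdk_ini_pid=$!; export spdk_ini_pid
        waitforlisten $spdk_ini_pid /var/tmp/spdk.tgt.sock
        # attach the FTL bdev that the target exports over NVMe/TCP
        $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2018-09.io.spdk:cnode0
        { echo '{"subsystems": ['
          $rpc save_subsystem_config -n bdev
          echo ']}'; } > "$rootdir/test/ftl/config/ini.json"
        killprocess $spdk_ini_pid
        unset spdk_ini_pid
    }

    tcp_dd() {
        tcp_initiator_setup
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir/test/ftl/config/ini.json" "$@"
    }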
00:31:59.212   16:42:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:31:59.212  [2024-12-09 16:42:28.131453] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:31:59.212  [2024-12-09 16:42:28.131574] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84940 ]
00:31:59.212  [2024-12-09 16:42:28.315681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:59.471  [2024-12-09 16:42:28.423454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:00.850  
[2024-12-09T16:42:30.967Z] Copying: 252/1024 [MB] (252 MBps)
[2024-12-09T16:42:31.905Z] Copying: 515/1024 [MB] (263 MBps)
[2024-12-09T16:42:32.844Z] Copying: 777/1024 [MB] (262 MBps)
[2024-12-09T16:42:34.224Z] Copying: 1024/1024 [MB] (average 260 MBps)
00:32:05.045  
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1'
00:32:05.045  Calculate MD5 checksum, iteration 1
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:05.045   16:42:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:05.045  [2024-12-09 16:42:34.111515] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:05.045  [2024-12-09 16:42:34.111632] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85000 ]
00:32:05.304  [2024-12-09 16:42:34.292025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:05.305  [2024-12-09 16:42:34.418720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:07.210  
[2024-12-09T16:42:36.647Z] Copying: 668/1024 [MB] (668 MBps)
[2024-12-09T16:42:37.585Z] Copying: 1024/1024 [MB] (average 661 MBps)
00:32:08.406  
00:32:08.406   16:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024
00:32:08.406   16:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:10.313    16:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:32:10.313  Fill FTL, iteration 2
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a725ec3f50ed6a9dcffe8d7216515fa1
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2'
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:10.314   16:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:32:10.314  [2024-12-09 16:42:39.233150] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:10.314  [2024-12-09 16:42:39.233444] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85057 ]
00:32:10.314  [2024-12-09 16:42:39.414824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.572  [2024-12-09 16:42:39.542409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:11.952  
[2024-12-09T16:42:42.068Z] Copying: 270/1024 [MB] (270 MBps)
[2024-12-09T16:42:43.447Z] Copying: 537/1024 [MB] (267 MBps)
[2024-12-09T16:42:44.016Z] Copying: 803/1024 [MB] (266 MBps)
[2024-12-09T16:42:45.395Z] Copying: 1024/1024 [MB] (average 267 MBps)
00:32:16.216  
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2'
00:32:16.216  Calculate MD5 checksum, iteration 2
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:16.216   16:42:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:16.216  [2024-12-09 16:42:45.163315] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:16.216  [2024-12-09 16:42:45.163453] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85121 ]
00:32:16.216  [2024-12-09 16:42:45.345532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:16.475  [2024-12-09 16:42:45.472449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:18.382  
[2024-12-09T16:42:47.821Z] Copying: 657/1024 [MB] (657 MBps)
[2024-12-09T16:42:49.201Z] Copying: 1024/1024 [MB] (average 655 MBps)
00:32:20.022  
00:32:20.022   16:42:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:32:20.022   16:42:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:21.929    16:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:32:21.929   16:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5bc6ca105cf95e699e753dd66ccae36f
00:32:21.929   16:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:32:21.929   16:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:32:21.929   16:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:32:21.929  [2024-12-09 16:42:50.863114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:21.929  [2024-12-09 16:42:50.863159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:32:21.929  [2024-12-09 16:42:50.863174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.021 ms
00:32:21.929  [2024-12-09 16:42:50.863185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:21.930  [2024-12-09 16:42:50.863216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:21.930  [2024-12-09 16:42:50.863232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:32:21.930  [2024-12-09 16:42:50.863243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:32:21.930  [2024-12-09 16:42:50.863253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:21.930  [2024-12-09 16:42:50.863274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:21.930  [2024-12-09 16:42:50.863285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:32:21.930  [2024-12-09 16:42:50.863296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:21.930  [2024-12-09 16:42:50.863306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:21.930  [2024-12-09 16:42:50.863370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.255 ms, result 0
00:32:21.930  true
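Note: per its own description in the dump below, verbose_mode unlocks the advanced FTL properties; with it set, bdev_ftl_get_properties reports the per-band and per-chunk state that the following steps rely on.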
00:32:21.930   16:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:21.930  {
00:32:21.930    "name": "ftl",
00:32:21.930    "properties": [
00:32:21.930      {
00:32:21.930        "name": "superblock_version",
00:32:21.930        "value": 5,
00:32:21.930        "read-only": true
00:32:21.930      },
00:32:21.930      {
00:32:21.930        "name": "base_device",
00:32:21.930        "bands": [
00:32:21.930          {
00:32:21.930            "id": 0,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 1,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 2,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 3,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 4,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 5,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 6,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 7,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 8,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 9,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 10,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 11,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 12,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 13,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 14,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 15,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 16,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 17,
00:32:21.930            "state": "FREE",
00:32:21.930            "validity": 0.0
00:32:21.930          }
00:32:21.930        ],
00:32:21.930        "read-only": true
00:32:21.930      },
00:32:21.930      {
00:32:21.930        "name": "cache_device",
00:32:21.930        "type": "bdev",
00:32:21.930        "chunks": [
00:32:21.930          {
00:32:21.930            "id": 0,
00:32:21.930            "state": "INACTIVE",
00:32:21.930            "utilization": 0.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 1,
00:32:21.930            "state": "CLOSED",
00:32:21.930            "utilization": 1.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 2,
00:32:21.930            "state": "CLOSED",
00:32:21.930            "utilization": 1.0
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 3,
00:32:21.930            "state": "OPEN",
00:32:21.930            "utilization": 0.001953125
00:32:21.930          },
00:32:21.930          {
00:32:21.930            "id": 4,
00:32:21.930            "state": "OPEN",
00:32:21.930            "utilization": 0.0
00:32:21.930          }
00:32:21.930        ],
00:32:21.930        "read-only": true
00:32:21.930      },
00:32:21.930      {
00:32:21.930        "name": "verbose_mode",
00:32:21.930        "value": true,
00:32:21.930        "unit": "",
00:32:21.930        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:32:21.930      },
00:32:21.930      {
00:32:21.930        "name": "prep_upgrade_on_shutdown",
00:32:21.930        "value": false,
00:32:21.930        "unit": "",
00:32:21.930        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:32:21.930      }
00:32:21.930    ]
00:32:21.930  }
00:32:21.930   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:32:22.190  [2024-12-09 16:42:51.250770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:22.190  [2024-12-09 16:42:51.250809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:32:22.190  [2024-12-09 16:42:51.250822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:22.190  [2024-12-09 16:42:51.250848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:22.190  [2024-12-09 16:42:51.250870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:22.190  [2024-12-09 16:42:51.250880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:32:22.190  [2024-12-09 16:42:51.250890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:22.190  [2024-12-09 16:42:51.250900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:22.190  [2024-12-09 16:42:51.250934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:22.190  [2024-12-09 16:42:51.250945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:32:22.190  [2024-12-09 16:42:51.250955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:22.190  [2024-12-09 16:42:51.250964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:22.190  [2024-12-09 16:42:51.251021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.231 ms, result 0
00:32:22.190  true
00:32:22.190    16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:32:22.190    16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:22.190    16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:32:22.449   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:32:22.449   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
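Note: the used=3 above follows directly from the earlier chunk dump: of the five cache_device chunks, ids 1 and 2 are CLOSED at utilization 1.0 and id 3 is OPEN at 0.001953125, so exactly three chunks have non-zero utilization. The same filter can be replayed by hand against the RPC output:

    $ scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
               | select(.utilization != 0.0)] | length'
    3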
00:32:22.449   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:32:22.709  [2024-12-09 16:42:51.678451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:22.709  [2024-12-09 16:42:51.678493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:32:22.709  [2024-12-09 16:42:51.678506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:22.709  [2024-12-09 16:42:51.678516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:22.709  [2024-12-09 16:42:51.678538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:22.709  [2024-12-09 16:42:51.678548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:32:22.709  [2024-12-09 16:42:51.678557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:32:22.709  [2024-12-09 16:42:51.678566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:22.709  [2024-12-09 16:42:51.678585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:22.709  [2024-12-09 16:42:51.678595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:32:22.709  [2024-12-09 16:42:51.678604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:22.709  [2024-12-09 16:42:51.678613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:22.709  [2024-12-09 16:42:51.678663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.203 ms, result 0
00:32:22.709  true
00:32:22.709   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:22.709  {
00:32:22.709    "name": "ftl",
00:32:22.709    "properties": [
00:32:22.709      {
00:32:22.709        "name": "superblock_version",
00:32:22.709        "value": 5,
00:32:22.709        "read-only": true
00:32:22.709      },
00:32:22.709      {
00:32:22.709        "name": "base_device",
00:32:22.709        "bands": [
00:32:22.709          {
00:32:22.709            "id": 0,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 1,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 2,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 3,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 4,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 5,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 6,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 7,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 8,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 9,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 10,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 11,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 12,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 13,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 14,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 15,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 16,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 17,
00:32:22.709            "state": "FREE",
00:32:22.709            "validity": 0.0
00:32:22.709          }
00:32:22.709        ],
00:32:22.709        "read-only": true
00:32:22.709      },
00:32:22.709      {
00:32:22.709        "name": "cache_device",
00:32:22.709        "type": "bdev",
00:32:22.709        "chunks": [
00:32:22.709          {
00:32:22.709            "id": 0,
00:32:22.709            "state": "INACTIVE",
00:32:22.709            "utilization": 0.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 1,
00:32:22.709            "state": "CLOSED",
00:32:22.709            "utilization": 1.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 2,
00:32:22.709            "state": "CLOSED",
00:32:22.709            "utilization": 1.0
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 3,
00:32:22.709            "state": "OPEN",
00:32:22.709            "utilization": 0.001953125
00:32:22.709          },
00:32:22.709          {
00:32:22.709            "id": 4,
00:32:22.709            "state": "OPEN",
00:32:22.709            "utilization": 0.0
00:32:22.709          }
00:32:22.709        ],
00:32:22.709        "read-only": true
00:32:22.709      },
00:32:22.709      {
00:32:22.709        "name": "verbose_mode",
00:32:22.709        "value": true,
00:32:22.709        "unit": "",
00:32:22.709        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:32:22.709      },
00:32:22.709      {
00:32:22.709        "name": "prep_upgrade_on_shutdown",
00:32:22.709        "value": true,
00:32:22.709        "unit": "",
00:32:22.709        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:32:22.709      }
00:32:22.709    ]
00:32:22.709  }
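Note: the only difference from the previous properties dump is prep_upgrade_on_shutdown flipping from false to true. A hypothetical one-liner (not part of the test) to extract just that flag:

    $ scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '.properties[] | select(.name == "prep_upgrade_on_shutdown").value'
    true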
00:32:22.709   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:32:22.709   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84759 ]]
00:32:22.709   16:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84759
00:32:22.709   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84759 ']'
00:32:22.709   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84759
00:32:22.969    16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:32:22.969   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:22.969    16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84759
00:32:22.969   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:22.969   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:22.969   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84759'
00:32:22.969  killing process with pid 84759
00:32:22.969   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84759
00:32:22.969   16:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84759
00:32:23.907  [2024-12-09 16:42:52.991661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:32:23.907  [2024-12-09 16:42:53.011817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:23.907  [2024-12-09 16:42:53.011857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinit core IO channel
00:32:23.907  [2024-12-09 16:42:53.011873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:23.907  [2024-12-09 16:42:53.011887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:23.907  [2024-12-09 16:42:53.011935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:32:23.907  [2024-12-09 16:42:53.015966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:23.907  [2024-12-09 16:42:53.015993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Unregister IO device
00:32:23.907  [2024-12-09 16:42:53.016011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.020 ms
00:32:23.907  [2024-12-09 16:42:53.016026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.120140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.120199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Stop core poller
00:32:32.059  [2024-12-09 16:43:00.120219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7115.621 ms
00:32:32.059  [2024-12-09 16:43:00.120229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.121367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.121392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist L2P
00:32:32.059  [2024-12-09 16:43:00.121404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.123 ms
00:32:32.059  [2024-12-09 16:43:00.121414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.122448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.122475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finish L2P trims
00:32:32.059  [2024-12-09 16:43:00.122493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.903 ms
00:32:32.059  [2024-12-09 16:43:00.122509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.137360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.137394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist NV cache metadata
00:32:32.059  [2024-12-09 16:43:00.137406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.840 ms
00:32:32.059  [2024-12-09 16:43:00.137431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.146395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.146432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist valid map metadata
00:32:32.059  [2024-12-09 16:43:00.146444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 8.942 ms
00:32:32.059  [2024-12-09 16:43:00.146454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.146544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.146563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist P2L metadata
00:32:32.059  [2024-12-09 16:43:00.146573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.056 ms
00:32:32.059  [2024-12-09 16:43:00.146583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.160241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.160274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist band info metadata
00:32:32.059  [2024-12-09 16:43:00.160286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 13.663 ms
00:32:32.059  [2024-12-09 16:43:00.160295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.174352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.174384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist trim metadata
00:32:32.059  [2024-12-09 16:43:00.174396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.046 ms
00:32:32.059  [2024-12-09 16:43:00.174420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.188003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.188142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist superblock
00:32:32.059  [2024-12-09 16:43:00.188161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 13.569 ms
00:32:32.059  [2024-12-09 16:43:00.188188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.202467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.059  [2024-12-09 16:43:00.202599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL clean state
00:32:32.059  [2024-12-09 16:43:00.202616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.194 ms
00:32:32.059  [2024-12-09 16:43:00.202625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.059  [2024-12-09 16:43:00.202713] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:32:32.059  [2024-12-09 16:43:00.202740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:32:32.059  [2024-12-09 16:43:00.202752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   2:   261120 / 261120 	wr_cnt: 1	state: closed
00:32:32.059  [2024-12-09 16:43:00.202764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   3:     2048 / 261120 	wr_cnt: 1	state: closed
00:32:32.059  [2024-12-09 16:43:00.202775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:32:32.059  [2024-12-09 16:43:00.202961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:32:32.059  [2024-12-09 16:43:00.202972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID:         8acfe7ce-843d-4f42-a2af-cd7dd442e3e7
00:32:32.059  [2024-12-09 16:43:00.202983] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs:    524288
00:32:32.059  [2024-12-09 16:43:00.202993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes:        786752
00:32:32.059  [2024-12-09 16:43:00.203003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes:         524288
00:32:32.059  [2024-12-09 16:43:00.203013] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF:                 1.5006
00:32:32.059  [2024-12-09 16:43:00.203028] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:32:32.059  [2024-12-09 16:43:00.203038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   crit: 0
00:32:32.059  [2024-12-09 16:43:00.203052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   high: 0
00:32:32.059  [2024-12-09 16:43:00.203061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]    low: 0
00:32:32.059  [2024-12-09 16:43:00.203070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]  start: 0
00:32:32.059  [2024-12-09 16:43:00.203081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.060  [2024-12-09 16:43:00.203091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Dump statistics
00:32:32.060  [2024-12-09 16:43:00.203101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.369 ms
00:32:32.060  [2024-12-09 16:43:00.203111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
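Note: the dumped statistics are internally consistent: WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006, and the user-write count matches the band dump above (two full bands of 261120 blocks plus 2048 blocks in band 3 = 524288 blocks, i.e. the two 1 GiB fills at 4 KiB per block).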
00:32:32.060  [2024-12-09 16:43:00.222272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.060  [2024-12-09 16:43:00.222404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize L2P
00:32:32.060  [2024-12-09 16:43:00.222541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 19.160 ms
00:32:32.060  [2024-12-09 16:43:00.222578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.223119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:32.060  [2024-12-09 16:43:00.223231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize P2L checkpointing
00:32:32.060  [2024-12-09 16:43:00.223299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.499 ms
00:32:32.060  [2024-12-09 16:43:00.223333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.285745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.285903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:32:32.060  [2024-12-09 16:43:00.286000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.286037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.286087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.286118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:32:32.060  [2024-12-09 16:43:00.286147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.286175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.286270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.286454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:32:32.060  [2024-12-09 16:43:00.286496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.286525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.286568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.286599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:32:32.060  [2024-12-09 16:43:00.286627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.286655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.400533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.400725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:32:32.060  [2024-12-09 16:43:00.400846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.400882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.493534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.493707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:32:32.060  [2024-12-09 16:43:00.493825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.493861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.493998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.494090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:32:32.060  [2024-12-09 16:43:00.494126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.494161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.494294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.494332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:32:32.060  [2024-12-09 16:43:00.494371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.494400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.494555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.494596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:32:32.060  [2024-12-09 16:43:00.494626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.494655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.494777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.494815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize superblock
00:32:32.060  [2024-12-09 16:43:00.494856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.494886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.495143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.495180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:32:32.060  [2024-12-09 16:43:00.495210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.495238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.495317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:32.060  [2024-12-09 16:43:00.495539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:32:32.060  [2024-12-09 16:43:00.495574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:32.060  [2024-12-09 16:43:00.495603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:32.060  [2024-12-09 16:43:00.495754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7496.060 ms, result 0
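Note: this shutdown is where prep_upgrade_on_shutdown takes effect: FTL persists the L2P, NV cache metadata, valid map, P2L, band info, trim metadata and superblock, then sets the clean state, consistent with the property's description above; of the 7496 ms total, the bulk is the 'Stop core poller' step (7115.621 ms).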
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85312
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85312
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85312 ']'
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:34.597  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:34.597   16:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:34.597  [2024-12-09 16:43:03.654666] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:34.597  [2024-12-09 16:43:03.654798] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85312 ]
00:32:34.857  [2024-12-09 16:43:03.841485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:34.857  [2024-12-09 16:43:03.947486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
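Note: the restart above leans on a previously saved target config: tcp_target_shutdown kills the old target (which triggers the FTL shutdown sequence just logged) and tcp_target_setup relaunches spdk_tgt from tgt.json, presumably the file written by the save_config call traced at common.sh@126 earlier, so the FTL bdev is re-created without manual reconfiguration. A condensed reconstruction of common.sh@130-@132 and @81-@91 ($rootdir is inferred; the real helpers take more options):

    tcp_target_shutdown() {
        [[ -n $spdk_tgt_pid ]] && killprocess $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        local base_bdev='' cache_bdev=''
        local params=('--cpumask=[0]')
        # reuse the saved config so the FTL bdev comes back exactly as before
        [[ -f $rootdir/test/ftl/config/tgt.json ]] &&
            params+=("--config=$rootdir/test/ftl/config/tgt.json")
        "$rootdir/build/bin/spdk_tgt" "${params[@]}" &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        waitforlisten $spdk_tgt_pid
    }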
00:32:35.793  [2024-12-09 16:43:04.883042] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:32:35.793  [2024-12-09 16:43:04.883112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:32:36.052  [2024-12-09 16:43:05.029510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.029711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:32:36.052  [2024-12-09 16:43:05.029735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:32:36.052  [2024-12-09 16:43:05.029746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.029816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.029829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:32:36.052  [2024-12-09 16:43:05.029840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.043 ms
00:32:36.052  [2024-12-09 16:43:05.029849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.029880] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:32:36.052  [2024-12-09 16:43:05.030804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:32:36.052  [2024-12-09 16:43:05.030827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.030838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:32:36.052  [2024-12-09 16:43:05.030848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.960 ms
00:32:36.052  [2024-12-09 16:43:05.030859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.032399] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:32:36.052  [2024-12-09 16:43:05.050445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.050484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Load super block
00:32:36.052  [2024-12-09 16:43:05.050503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 18.077 ms
00:32:36.052  [2024-12-09 16:43:05.050513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.050573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.050584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Validate super block
00:32:36.052  [2024-12-09 16:43:05.050595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.022 ms
00:32:36.052  [2024-12-09 16:43:05.050604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.057402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.057437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:32:36.052  [2024-12-09 16:43:05.057449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 6.736 ms
00:32:36.052  [2024-12-09 16:43:05.057459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.057520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.057533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:32:36.052  [2024-12-09 16:43:05.057544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.040 ms
00:32:36.052  [2024-12-09 16:43:05.057554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.057603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.057622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:32:36.052  [2024-12-09 16:43:05.057633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.009 ms
00:32:36.052  [2024-12-09 16:43:05.057648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.057679] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:32:36.052  [2024-12-09 16:43:05.062526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.062557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:32:36.052  [2024-12-09 16:43:05.062568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.861 ms
00:32:36.052  [2024-12-09 16:43:05.062597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.062627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.062638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:32:36.052  [2024-12-09 16:43:05.062648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:36.052  [2024-12-09 16:43:05.062658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.062712] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0
00:32:36.052  [2024-12-09 16:43:05.062740] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes
00:32:36.052  [2024-12-09 16:43:05.062774] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes
00:32:36.052  [2024-12-09 16:43:05.062791] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes
00:32:36.052  [2024-12-09 16:43:05.062877] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:32:36.052  [2024-12-09 16:43:05.062889] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:32:36.052  [2024-12-09 16:43:05.062902] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:32:36.052  [2024-12-09 16:43:05.062934] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:32:36.052  [2024-12-09 16:43:05.062946] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:32:36.052  [2024-12-09 16:43:05.062961] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:32:36.052  [2024-12-09 16:43:05.062970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:32:36.052  [2024-12-09 16:43:05.062980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:32:36.052  [2024-12-09 16:43:05.062990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:32:36.052  [2024-12-09 16:43:05.063000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.063009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:32:36.052  [2024-12-09 16:43:05.063020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.291 ms
00:32:36.052  [2024-12-09 16:43:05.063031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.063127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.052  [2024-12-09 16:43:05.063138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:32:36.052  [2024-12-09 16:43:05.063152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.078 ms
00:32:36.052  [2024-12-09 16:43:05.063162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.052  [2024-12-09 16:43:05.063251] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:32:36.052  [2024-12-09 16:43:05.063263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:32:36.052  [2024-12-09 16:43:05.063274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:32:36.052  [2024-12-09 16:43:05.063303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:32:36.052  [2024-12-09 16:43:05.063322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:32:36.052  [2024-12-09 16:43:05.063331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:32:36.052  [2024-12-09 16:43:05.063340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:32:36.052  [2024-12-09 16:43:05.063358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:32:36.052  [2024-12-09 16:43:05.063367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:32:36.052  [2024-12-09 16:43:05.063401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:32:36.052  [2024-12-09 16:43:05.063411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:32:36.052  [2024-12-09 16:43:05.063429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:32:36.052  [2024-12-09 16:43:05.063438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:32:36.052  [2024-12-09 16:43:05.063457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:32:36.052  [2024-12-09 16:43:05.063466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:32:36.052  [2024-12-09 16:43:05.063495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:32:36.052  [2024-12-09 16:43:05.063506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:32:36.052  [2024-12-09 16:43:05.063525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:32:36.052  [2024-12-09 16:43:05.063534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:32:36.052  [2024-12-09 16:43:05.063553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:32:36.052  [2024-12-09 16:43:05.063562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:32:36.052  [2024-12-09 16:43:05.063580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:32:36.052  [2024-12-09 16:43:05.063589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:32:36.052  [2024-12-09 16:43:05.063608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:32:36.052  [2024-12-09 16:43:05.063635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:32:36.052  [2024-12-09 16:43:05.063663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:32:36.052  [2024-12-09 16:43:05.063672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063681] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:32:36.052  [2024-12-09 16:43:05.063691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:32:36.052  [2024-12-09 16:43:05.063701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:32:36.052  [2024-12-09 16:43:05.063710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:36.052  [2024-12-09 16:43:05.063724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:32:36.052  [2024-12-09 16:43:05.063734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:32:36.052  [2024-12-09 16:43:05.063743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:32:36.052  [2024-12-09 16:43:05.063752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:32:36.052  [2024-12-09 16:43:05.063762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:32:36.052  [2024-12-09 16:43:05.063771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
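The base-device regions also line up: data_btm starts at 0.25 MiB and spans 18432.00 MiB, so vmap lands at exactly 18432.25 MiB, and with the 18 bands reported by bdev_ftl_get_properties later in this run that works out to 1024 MiB per band. A quick consistency check, not part of the test itself:

  awk 'BEGIN { printf "vmap offset: %.2f MiB, band size: %.0f MiB\n", 0.25 + 18432, 18432 / 18 }'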
00:32:36.052  [2024-12-09 16:43:05.063782] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:32:36.052  [2024-12-09 16:43:05.063794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:32:36.052  [2024-12-09 16:43:05.063818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:32:36.052  [2024-12-09 16:43:05.063849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:32:36.052  [2024-12-09 16:43:05.063859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:32:36.052  [2024-12-09 16:43:05.063869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:32:36.052  [2024-12-09 16:43:05.063880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:32:36.052  [2024-12-09 16:43:05.063957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:32:36.052  [2024-12-09 16:43:05.063968] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:32:36.053  [2024-12-09 16:43:05.063980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:36.053  [2024-12-09 16:43:05.063991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:36.053  [2024-12-09 16:43:05.064001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:32:36.053  [2024-12-09 16:43:05.064012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:32:36.053  [2024-12-09 16:43:05.064024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:32:36.053  [2024-12-09 16:43:05.064034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:36.053  [2024-12-09 16:43:05.064044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:32:36.053  [2024-12-09 16:43:05.064054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.837 ms
00:32:36.053  [2024-12-09 16:43:05.064064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:36.053  [2024-12-09 16:43:05.064109] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
00:32:36.053  [2024-12-09 16:43:05.064122] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:32:40.241  [2024-12-09 16:43:08.721020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.241  [2024-12-09 16:43:08.721208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Scrub NV cache
00:32:40.241  [2024-12-09 16:43:08.721249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3662.847 ms
00:32:40.241  [2024-12-09 16:43:08.721262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
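Scrubbing the 5 chunks of the 5120.00 MiB NV cache device took 3662.847 ms; assuming the scrub touches essentially the whole device, that is an effective rate of roughly:

  awk 'BEGIN { printf "%.0f MiB/s\n", 5120 / 3.662847 }'   # ~1398 MiB/s, under the stated assumption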
00:32:40.241  [2024-12-09 16:43:08.756404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.241  [2024-12-09 16:43:08.756444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:32:40.241  [2024-12-09 16:43:08.756460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 34.875 ms
00:32:40.241  [2024-12-09 16:43:08.756469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.241  [2024-12-09 16:43:08.756564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.241  [2024-12-09 16:43:08.756581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:32:40.241  [2024-12-09 16:43:08.756592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.013 ms
00:32:40.241  [2024-12-09 16:43:08.756602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.241  [2024-12-09 16:43:08.801693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.241  [2024-12-09 16:43:08.801733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:32:40.241  [2024-12-09 16:43:08.801750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 45.121 ms
00:32:40.241  [2024-12-09 16:43:08.801777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.241  [2024-12-09 16:43:08.801809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.241  [2024-12-09 16:43:08.801820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:32:40.242  [2024-12-09 16:43:08.801830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:32:40.242  [2024-12-09 16:43:08.801840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.802474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.802589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:32:40.242  [2024-12-09 16:43:08.802668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.579 ms
00:32:40.242  [2024-12-09 16:43:08.802683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.802737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.802748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:32:40.242  [2024-12-09 16:43:08.802759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.019 ms
00:32:40.242  [2024-12-09 16:43:08.802770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.822865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.822908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:32:40.242  [2024-12-09 16:43:08.822922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 20.103 ms
00:32:40.242  [2024-12-09 16:43:08.822932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.853029] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4
00:32:40.242  [2024-12-09 16:43:08.853072] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:32:40.242  [2024-12-09 16:43:08.853086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.853096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore NV cache metadata
00:32:40.242  [2024-12-09 16:43:08.853107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 30.083 ms
00:32:40.242  [2024-12-09 16:43:08.853116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.872190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.872351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore valid map metadata
00:32:40.242  [2024-12-09 16:43:08.872389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 19.056 ms
00:32:40.242  [2024-12-09 16:43:08.872401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.889517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.889554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore band info metadata
00:32:40.242  [2024-12-09 16:43:08.889566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.080 ms
00:32:40.242  [2024-12-09 16:43:08.889591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.906379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.906534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore trim metadata
00:32:40.242  [2024-12-09 16:43:08.906569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 16.774 ms
00:32:40.242  [2024-12-09 16:43:08.906579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.907362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.907389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:32:40.242  [2024-12-09 16:43:08.907401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.682 ms
00:32:40.242  [2024-12-09 16:43:08.907411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.988639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.988697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore P2L checkpoints
00:32:40.242  [2024-12-09 16:43:08.988713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 81.336 ms
00:32:40.242  [2024-12-09 16:43:08.988724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.998871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:32:40.242  [2024-12-09 16:43:08.999497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.999528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:32:40.242  [2024-12-09 16:43:08.999541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 10.744 ms
00:32:40.242  [2024-12-09 16:43:08.999551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.999620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.999636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore L2P
00:32:40.242  [2024-12-09 16:43:08.999647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:32:40.242  [2024-12-09 16:43:08.999657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.999715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.999728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:32:40.242  [2024-12-09 16:43:08.999738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.016 ms
00:32:40.242  [2024-12-09 16:43:08.999748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.999770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.999781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:32:40.242  [2024-12-09 16:43:08.999795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:40.242  [2024-12-09 16:43:08.999804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:08.999838] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:32:40.242  [2024-12-09 16:43:08.999850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:08.999860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Self test on startup
00:32:40.242  [2024-12-09 16:43:08.999871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.013 ms
00:32:40.242  [2024-12-09 16:43:08.999880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:09.034053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:09.034092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL dirty state
00:32:40.242  [2024-12-09 16:43:09.034105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 34.189 ms
00:32:40.242  [2024-12-09 16:43:09.034132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:09.034204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.242  [2024-12-09 16:43:09.034216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:32:40.242  [2024-12-09 16:43:09.034237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.031 ms
00:32:40.242  [2024-12-09 16:43:09.034247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.242  [2024-12-09 16:43:09.035363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4011.903 ms, result 0
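The 'FTL startup' total of 4011.903 ms is dominated by the Scrub NV cache step (3662.847 ms). Since every step is logged as the same Action/name/duration/status quadruple, the slowest steps can be pulled out of a saved copy of this console output; a rough sketch assuming GNU grep/sort and a hypothetical file name build.log:

  grep -oE 'name: +.*|duration: [0-9.]+ ms' build.log \
    | paste - - \
    | sort -t: -k3,3 -g -r \
    | head -n 5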
00:32:40.242  [2024-12-09 16:43:09.050376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:40.242  [2024-12-09 16:43:09.066374] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:32:40.242  [2024-12-09 16:43:09.074888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:32:40.501   16:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:40.501   16:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:32:40.501   16:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:40.501   16:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:32:40.501   16:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:32:40.759  [2024-12-09 16:43:09.790118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.759  [2024-12-09 16:43:09.790156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:32:40.759  [2024-12-09 16:43:09.790173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:32:40.759  [2024-12-09 16:43:09.790200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.759  [2024-12-09 16:43:09.790222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.759  [2024-12-09 16:43:09.790233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:32:40.759  [2024-12-09 16:43:09.790243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:32:40.759  [2024-12-09 16:43:09.790253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.759  [2024-12-09 16:43:09.790272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:40.759  [2024-12-09 16:43:09.790283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:32:40.759  [2024-12-09 16:43:09.790293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:40.759  [2024-12-09 16:43:09.790303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:40.759  [2024-12-09 16:43:09.790355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.230 ms, result 0
00:32:40.759  true
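The 'true' above is the RPC's return value; the new setting can then be read back through bdev_ftl_get_properties, as the next command does. An illustrative round-trip, with the rpc.py path shortened and the read-back filtered with jq:

  rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true   # prints: true
  rpc.py bdev_ftl_get_properties -b ftl \
    | jq '.properties[] | select(.name == "verbose_mode").value'   # prints: true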
00:32:40.759   16:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:41.019  {
00:32:41.019    "name": "ftl",
00:32:41.019    "properties": [
00:32:41.019      {
00:32:41.019        "name": "superblock_version",
00:32:41.019        "value": 5,
00:32:41.019        "read-only": true
00:32:41.019      },
00:32:41.019      {
00:32:41.019        "name": "base_device",
00:32:41.019        "bands": [
00:32:41.019          {
00:32:41.019            "id": 0,
00:32:41.019            "state": "CLOSED",
00:32:41.019            "validity": 1.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 1,
00:32:41.019            "state": "CLOSED",
00:32:41.019            "validity": 1.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 2,
00:32:41.019            "state": "CLOSED",
00:32:41.019            "validity": 0.007843137254901933
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 3,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 4,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 5,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 6,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 7,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 8,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 9,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 10,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 11,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 12,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 13,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 14,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 15,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 16,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 17,
00:32:41.019            "state": "FREE",
00:32:41.019            "validity": 0.0
00:32:41.019          }
00:32:41.019        ],
00:32:41.019        "read-only": true
00:32:41.019      },
00:32:41.019      {
00:32:41.019        "name": "cache_device",
00:32:41.019        "type": "bdev",
00:32:41.019        "chunks": [
00:32:41.019          {
00:32:41.019            "id": 0,
00:32:41.019            "state": "INACTIVE",
00:32:41.019            "utilization": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 1,
00:32:41.019            "state": "OPEN",
00:32:41.019            "utilization": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 2,
00:32:41.019            "state": "OPEN",
00:32:41.019            "utilization": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 3,
00:32:41.019            "state": "FREE",
00:32:41.019            "utilization": 0.0
00:32:41.019          },
00:32:41.019          {
00:32:41.019            "id": 4,
00:32:41.019            "state": "FREE",
00:32:41.019            "utilization": 0.0
00:32:41.019          }
00:32:41.019        ],
00:32:41.019        "read-only": true
00:32:41.019      },
00:32:41.019      {
00:32:41.019        "name": "verbose_mode",
00:32:41.019        "value": true,
00:32:41.019        "unit": "",
00:32:41.019        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:32:41.019      },
00:32:41.019      {
00:32:41.019        "name": "prep_upgrade_on_shutdown",
00:32:41.019        "value": false,
00:32:41.019        "unit": "",
00:32:41.019        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:32:41.019      }
00:32:41.019    ]
00:32:41.019  }
00:32:41.019    16:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:32:41.019    16:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:41.019    16:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:32:41.278   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:32:41.278   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:32:41.278    16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:32:41.278    16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:32:41.278    16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:41.278  Validate MD5 checksum, iteration 1
00:32:41.278   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:32:41.278   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
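Both gates pass: no cache chunk shows nonzero utilization and no band is OPENED, so the dirty-shutdown test can proceed. The same style of jq filter works standalone against the JSON dump above (saved under an assumed name props.json); note that the bands array lives under the property named "base_device":

  jq '[.properties[] | select(.name == "base_device") | .bands[] | select(.state == "CLOSED")] | length' props.json   # -> 3 in this run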
00:32:41.278   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:32:41.278   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:41.279   16:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:41.538  [2024-12-09 16:43:10.511540] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:41.538  [2024-12-09 16:43:10.511879] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85400 ]
00:32:41.538  [2024-12-09 16:43:10.696164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:41.797  [2024-12-09 16:43:10.824695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:43.703  
[2024-12-09T16:43:13.142Z] Copying: 643/1024 [MB] (643 MBps)
[2024-12-09T16:43:15.049Z] Copying: 1024/1024 [MB] (average 637 MBps)
00:32:45.871  
00:32:45.871   16:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:32:45.871   16:43:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:47.248    16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:47.248  Validate MD5 checksum, iteration 2
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a725ec3f50ed6a9dcffe8d7216515fa1
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a725ec3f50ed6a9dcffe8d7216515fa1 != \a\7\2\5\e\c\3\f\5\0\e\d\6\a\9\d\c\f\f\e\8\d\7\2\1\6\5\1\5\f\a\1 ]]
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:47.248   16:43:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:47.508  [2024-12-09 16:43:16.509255] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:47.508  [2024-12-09 16:43:16.509589] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85467 ]
00:32:47.767  [2024-12-09 16:43:16.694662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:47.767  [2024-12-09 16:43:16.825140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:49.674  
[2024-12-09T16:43:19.421Z] Copying: 639/1024 [MB] (639 MBps)
[2024-12-09T16:43:20.358Z] Copying: 1024/1024 [MB] (average 636 MBps)
00:32:51.180  
00:32:51.438   16:43:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:32:51.438   16:43:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:53.343    16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5bc6ca105cf95e699e753dd66ccae36f
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5bc6ca105cf95e699e753dd66ccae36f != \5\b\c\6\c\a\1\0\5\c\f\9\5\e\6\9\9\e\7\5\3\d\d\6\6\c\c\a\e\3\6\f ]]
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
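The loop exits here with both windows matching their expected sums, so the 2 GiB read back over NVMe/TCP is intact. The whole validation reduces to this pattern (a condensed sketch: spdk_dd stands in for the full invocation shown above, with the cpumask, rpc-socket, and --json config omitted; /tmp/file is illustrative; the two sums are the ones computed in this run):

  declare -A expected=( [0]=a725ec3f50ed6a9dcffe8d7216515fa1 [1024]=5bc6ca105cf95e699e753dd66ccae36f )
  for skip in 0 1024; do
    spdk_dd --ib=ftln1 --of=/tmp/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum /tmp/file | cut -d' ' -f1)
    [[ $sum == "${expected[$skip]}" ]] || { echo "MD5 mismatch at skip=$skip"; exit 1; }
  done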
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85312 ]]
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85312
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85529
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85529
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85529 ']'
00:32:53.343  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:53.343   16:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:53.343  [2024-12-09 16:43:22.201194] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:53.343  [2024-12-09 16:43:22.201345] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85529 ]
00:32:53.343  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85312 Killed                  $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg"
00:32:53.343  [2024-12-09 16:43:22.386163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:53.343  [2024-12-09 16:43:22.494795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
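This is the dirty-shutdown-and-restart sequence: per the ftl/common.sh trace above, tcp_target_shutdown_dirty boils down to (pid value from this run):

  kill -9 "$spdk_tgt_pid"   # SIGKILL, so FTL never gets a chance to persist a clean-shutdown state
  unset spdk_tgt_pid

With the old target (pid 85312) killed outright, the new instance (pid 85529) loads the same tgt.json, and the 'SHM: clean 0, shm_clean 0' line below shows it coming up without a clean shutdown on record before walking the recovery actions.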
00:32:54.281  [2024-12-09 16:43:23.436965] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:32:54.281  [2024-12-09 16:43:23.437031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:32:54.541  [2024-12-09 16:43:23.582661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.541  [2024-12-09 16:43:23.582850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:32:54.541  [2024-12-09 16:43:23.582874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:54.541  [2024-12-09 16:43:23.582885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.541  [2024-12-09 16:43:23.582969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.541  [2024-12-09 16:43:23.582982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:32:54.541  [2024-12-09 16:43:23.582994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.040 ms
00:32:54.541  [2024-12-09 16:43:23.583004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.541  [2024-12-09 16:43:23.583034] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:32:54.542  [2024-12-09 16:43:23.584090] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:32:54.542  [2024-12-09 16:43:23.584113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.584123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:32:54.542  [2024-12-09 16:43:23.584134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.091 ms
00:32:54.542  [2024-12-09 16:43:23.584144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.584498] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:32:54.542  [2024-12-09 16:43:23.608292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.608334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Load super block
00:32:54.542  [2024-12-09 16:43:23.608349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 23.832 ms
00:32:54.542  [2024-12-09 16:43:23.608370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.622687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.622728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Validate super block
00:32:54.542  [2024-12-09 16:43:23.622741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.027 ms
00:32:54.542  [2024-12-09 16:43:23.622750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.623234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.623250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:32:54.542  [2024-12-09 16:43:23.623261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.403 ms
00:32:54.542  [2024-12-09 16:43:23.623271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.623332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.623347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:32:54.542  [2024-12-09 16:43:23.623358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.040 ms
00:32:54.542  [2024-12-09 16:43:23.623367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.623394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.623406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:32:54.542  [2024-12-09 16:43:23.623434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.007 ms
00:32:54.542  [2024-12-09 16:43:23.623444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.623466] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:32:54.542  [2024-12-09 16:43:23.627358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.627390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:32:54.542  [2024-12-09 16:43:23.627403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3.903 ms
00:32:54.542  [2024-12-09 16:43:23.627413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.627450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.627461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:32:54.542  [2024-12-09 16:43:23.627471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:32:54.542  [2024-12-09 16:43:23.627481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.627517] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0
00:32:54.542  [2024-12-09 16:43:23.627542] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes
00:32:54.542  [2024-12-09 16:43:23.627577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes
00:32:54.542  [2024-12-09 16:43:23.627606] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes
00:32:54.542  [2024-12-09 16:43:23.627694] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:32:54.542  [2024-12-09 16:43:23.627707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:32:54.542  [2024-12-09 16:43:23.627721] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:32:54.542  [2024-12-09 16:43:23.627734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:32:54.542  [2024-12-09 16:43:23.627746] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:32:54.542  [2024-12-09 16:43:23.627757] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:32:54.542  [2024-12-09 16:43:23.627767] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:32:54.542  [2024-12-09 16:43:23.627777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:32:54.542  [2024-12-09 16:43:23.627787] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:32:54.542  [2024-12-09 16:43:23.627801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.627812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:32:54.542  [2024-12-09 16:43:23.627822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.287 ms
00:32:54.542  [2024-12-09 16:43:23.627832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.627912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.542  [2024-12-09 16:43:23.627924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:32:54.542  [2024-12-09 16:43:23.627935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.063 ms
00:32:54.542  [2024-12-09 16:43:23.627945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.542  [2024-12-09 16:43:23.628043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:32:54.542  [2024-12-09 16:43:23.628059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:32:54.542  [2024-12-09 16:43:23.628071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:32:54.542  [2024-12-09 16:43:23.628103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:32:54.542  [2024-12-09 16:43:23.628123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:32:54.542  [2024-12-09 16:43:23.628132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:32:54.542  [2024-12-09 16:43:23.628142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:32:54.542  [2024-12-09 16:43:23.628162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:32:54.542  [2024-12-09 16:43:23.628171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:32:54.542  [2024-12-09 16:43:23.628189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:32:54.542  [2024-12-09 16:43:23.628198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:32:54.542  [2024-12-09 16:43:23.628216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:32:54.542  [2024-12-09 16:43:23.628224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:32:54.542  [2024-12-09 16:43:23.628242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:32:54.542  [2024-12-09 16:43:23.628261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:32:54.542  [2024-12-09 16:43:23.628281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:32:54.542  [2024-12-09 16:43:23.628289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:32:54.542  [2024-12-09 16:43:23.628306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:32:54.542  [2024-12-09 16:43:23.628315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:32:54.542  [2024-12-09 16:43:23.628333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:32:54.542  [2024-12-09 16:43:23.628341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:32:54.542  [2024-12-09 16:43:23.628359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:32:54.542  [2024-12-09 16:43:23.628367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:32:54.542  [2024-12-09 16:43:23.628385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:32:54.542  [2024-12-09 16:43:23.628411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:32:54.542  [2024-12-09 16:43:23.628437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:32:54.542  [2024-12-09 16:43:23.628446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628454] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:32:54.542  [2024-12-09 16:43:23.628465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:32:54.542  [2024-12-09 16:43:23.628475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:32:54.542  [2024-12-09 16:43:23.628484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:54.542  [2024-12-09 16:43:23.628493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:32:54.542  [2024-12-09 16:43:23.628502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:32:54.542  [2024-12-09 16:43:23.628511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:32:54.542  [2024-12-09 16:43:23.628520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:32:54.542  [2024-12-09 16:43:23.628530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:32:54.543  [2024-12-09 16:43:23.628539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:32:54.543  [2024-12-09 16:43:23.628549] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:32:54.543  [2024-12-09 16:43:23.628561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:32:54.543  [2024-12-09 16:43:23.628582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:32:54.543  [2024-12-09 16:43:23.628612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:32:54.543  [2024-12-09 16:43:23.628622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:32:54.543  [2024-12-09 16:43:23.628633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:32:54.543  [2024-12-09 16:43:23.628643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:32:54.543  [2024-12-09 16:43:23.628712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:32:54.543  [2024-12-09 16:43:23.628723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:54.543  [2024-12-09 16:43:23.628748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:32:54.543  [2024-12-09 16:43:23.628759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:32:54.543  [2024-12-09 16:43:23.628768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
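A quick unit check on the two dumps above (an inference from the reported numbers, not something the log states explicitly): the data_btm region is reported as 18432.00 MiB, and the matching base-dev superblock entry (type 0x9) as blk_sz 0x480000 = 4,718,592 blocks. Assuming both describe the same region:

    18432 MiB x 1,048,576 B/MiB = 19,327,352,832 B
    19,327,352,832 B / 4,718,592 blocks = 4,096 B/block

i.e. the layout is expressed in 4 KiB FTL blocks; the 0.25 MiB data_btm offset likewise matches blk_offs 0x40 (64 blocks x 4 KiB).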
00:32:54.543  [2024-12-09 16:43:23.628779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.628789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:32:54.543  [2024-12-09 16:43:23.628799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.802 ms
00:32:54.543  [2024-12-09 16:43:23.628809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.543  [2024-12-09 16:43:23.664956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.664991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:32:54.543  [2024-12-09 16:43:23.665005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 36.141 ms
00:32:54.543  [2024-12-09 16:43:23.665015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.543  [2024-12-09 16:43:23.665052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.665063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:32:54.543  [2024-12-09 16:43:23.665074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.010 ms
00:32:54.543  [2024-12-09 16:43:23.665083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.543  [2024-12-09 16:43:23.709820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.709855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:32:54.543  [2024-12-09 16:43:23.709868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 44.754 ms
00:32:54.543  [2024-12-09 16:43:23.709878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.543  [2024-12-09 16:43:23.709919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.709931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:32:54.543  [2024-12-09 16:43:23.709942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:32:54.543  [2024-12-09 16:43:23.709956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.543  [2024-12-09 16:43:23.710117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.710132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:32:54.543  [2024-12-09 16:43:23.710144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.069 ms
00:32:54.543  [2024-12-09 16:43:23.710153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.543  [2024-12-09 16:43:23.710192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.543  [2024-12-09 16:43:23.710205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:32:54.543  [2024-12-09 16:43:23.710215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.017 ms
00:32:54.543  [2024-12-09 16:43:23.710225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.730348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.730383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:32:54.803  [2024-12-09 16:43:23.730397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 20.127 ms
00:32:54.803  [2024-12-09 16:43:23.730410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.730537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.730564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize recovery
00:32:54.803  [2024-12-09 16:43:23.730577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.020 ms
00:32:54.803  [2024-12-09 16:43:23.730586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.765633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.765805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover band state
00:32:54.803  [2024-12-09 16:43:23.765888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 35.074 ms
00:32:54.803  [2024-12-09 16:43:23.765938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.779975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.780135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:32:54.803  [2024-12-09 16:43:23.780264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.603 ms
00:32:54.803  [2024-12-09 16:43:23.780304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.861048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.861108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore P2L checkpoints
00:32:54.803  [2024-12-09 16:43:23.861125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 80.809 ms
00:32:54.803  [2024-12-09 16:43:23.861135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.861304] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8
00:32:54.803  [2024-12-09 16:43:23.861424] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9
00:32:54.803  [2024-12-09 16:43:23.861531] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12
00:32:54.803  [2024-12-09 16:43:23.861634] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0
00:32:54.803  [2024-12-09 16:43:23.861647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.861657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Preprocess P2L checkpoints
00:32:54.803  [2024-12-09 16:43:23.861668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.453 ms
00:32:54.803  [2024-12-09 16:43:23.861680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.861760] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L
00:32:54.803  [2024-12-09 16:43:23.861775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.861789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover open bands P2L
00:32:54.803  [2024-12-09 16:43:23.861800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.016 ms
00:32:54.803  [2024-12-09 16:43:23.861809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.882435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.882480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover chunk state
00:32:54.803  [2024-12-09 16:43:23.882494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 20.636 ms
00:32:54.803  [2024-12-09 16:43:23.882504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.895329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.895362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover max seq ID
00:32:54.803  [2024-12-09 16:43:23.895375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.009 ms
00:32:54.803  [2024-12-09 16:43:23.895385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:54.803  [2024-12-09 16:43:23.895476] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14
00:32:54.803  [2024-12-09 16:43:23.895661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:54.803  [2024-12-09 16:43:23.895672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, prepare
00:32:54.803  [2024-12-09 16:43:23.895683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.187 ms
00:32:54.803  [2024-12-09 16:43:23.895692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.372  [2024-12-09 16:43:24.484708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.372  [2024-12-09 16:43:24.484776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, read vss
00:32:55.372  [2024-12-09 16:43:24.484794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 588.897 ms
00:32:55.372  [2024-12-09 16:43:24.484805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.372  [2024-12-09 16:43:24.490294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.372  [2024-12-09 16:43:24.490337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, persist P2L map
00:32:55.372  [2024-12-09 16:43:24.490350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.094 ms
00:32:55.372  [2024-12-09 16:43:24.490361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.372  [2024-12-09 16:43:24.490864] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14
00:32:55.372  [2024-12-09 16:43:24.490886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.372  [2024-12-09 16:43:24.490909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, close chunk
00:32:55.372  [2024-12-09 16:43:24.490921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.487 ms
00:32:55.372  [2024-12-09 16:43:24.490932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.372  [2024-12-09 16:43:24.490964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.372  [2024-12-09 16:43:24.490978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, cleanup
00:32:55.372  [2024-12-09 16:43:24.490989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:32:55.372  [2024-12-09 16:43:24.491005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.372  [2024-12-09 16:43:24.491040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 596.530 ms, result 0
00:32:55.372  [2024-12-09 16:43:24.491081] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15
00:32:55.372  [2024-12-09 16:43:24.491157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.372  [2024-12-09 16:43:24.491169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, prepare
00:32:55.372  [2024-12-09 16:43:24.491180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.077 ms
00:32:55.372  [2024-12-09 16:43:24.491189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.095040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.095232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, read vss
00:32:55.942  [2024-12-09 16:43:25.095342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 603.546 ms
00:32:55.942  [2024-12-09 16:43:25.095381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.101135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.101290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, persist P2L map
00:32:55.942  [2024-12-09 16:43:25.101371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.145 ms
00:32:55.942  [2024-12-09 16:43:25.101408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.101939] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15
00:32:55.942  [2024-12-09 16:43:25.102018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.102111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, close chunk
00:32:55.942  [2024-12-09 16:43:25.102148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.552 ms
00:32:55.942  [2024-12-09 16:43:25.102179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.102434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.102478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, cleanup
00:32:55.942  [2024-12-09 16:43:25.102509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:55.942  [2024-12-09 16:43:25.102539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.102603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 612.507 ms, result 0
00:32:55.942  [2024-12-09 16:43:25.102883] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
00:32:55.942  [2024-12-09 16:43:25.102958] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:32:55.942  [2024-12-09 16:43:25.103020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.103109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover open chunks P2L
00:32:55.942  [2024-12-09 16:43:25.103145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1209.526 ms
00:32:55.942  [2024-12-09 16:43:25.103175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
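The two per-chunk management processes above account for essentially the whole step: 596.530 ms + 612.507 ms = 1209.037 ms of the 1209.526 ms reported for 'Recover open chunks P2L', leaving roughly 0.5 ms of surrounding bookkeeping.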
00:32:55.942  [2024-12-09 16:43:25.103234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.103274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize recovery
00:32:55.942  [2024-12-09 16:43:25.103352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:55.942  [2024-12-09 16:43:25.103437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.114507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:32:55.942  [2024-12-09 16:43:25.114767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.114786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:32:55.942  [2024-12-09 16:43:25.114798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.300 ms
00:32:55.942  [2024-12-09 16:43:25.114809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.115412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.115429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore L2P from shared memory
00:32:55.942  [2024-12-09 16:43:25.115445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.525 ms
00:32:55.942  [2024-12-09 16:43:25.115456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.117421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.117436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore valid maps counters
00:32:55.942  [2024-12-09 16:43:25.117447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.949 ms
00:32:55.942  [2024-12-09 16:43:25.117456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.117495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.117507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Complete trim transaction
00:32:55.942  [2024-12-09 16:43:25.117518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:32:55.942  [2024-12-09 16:43:25.117532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.117626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.117638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:32:55.942  [2024-12-09 16:43:25.117648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.016 ms
00:32:55.942  [2024-12-09 16:43:25.117658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.117678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.117689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:32:55.942  [2024-12-09 16:43:25.117699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:32:55.942  [2024-12-09 16:43:25.117709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.942  [2024-12-09 16:43:25.117742] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:32:55.942  [2024-12-09 16:43:25.117754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.942  [2024-12-09 16:43:25.117764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Self test on startup
00:32:55.942  [2024-12-09 16:43:25.117773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.013 ms
00:32:55.942  [2024-12-09 16:43:25.117783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:55.943  [2024-12-09 16:43:25.117832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:55.943  [2024-12-09 16:43:25.117844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:32:55.943  [2024-12-09 16:43:25.117854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.028 ms
00:32:55.943  [2024-12-09 16:43:25.117864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:56.202  [2024-12-09 16:43:25.118978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1538.395 ms, result 0
00:32:56.202  [2024-12-09 16:43:25.131427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:56.202  [2024-12-09 16:43:25.147402] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:32:56.202  [2024-12-09 16:43:25.156803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:32:56.202  Validate MD5 checksum, iteration 1
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:56.202   16:43:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:56.202  [2024-12-09 16:43:25.294450] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:32:56.202  [2024-12-09 16:43:25.294740] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85570 ]
00:32:56.461  [2024-12-09 16:43:25.472567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:56.461  [2024-12-09 16:43:25.603456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:58.367  
[2024-12-09T16:43:28.115Z] Copying: 647/1024 [MB] (647 MBps)
[2024-12-09T16:43:30.025Z] Copying: 1024/1024 [MB] (average 646 MBps)
00:33:00.846  
00:33:00.846   16:43:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:33:00.846   16:43:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:33:02.882    16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a725ec3f50ed6a9dcffe8d7216515fa1
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a725ec3f50ed6a9dcffe8d7216515fa1 != \a\7\2\5\e\c\3\f\5\0\e\d\6\a\9\d\c\f\f\e\8\d\7\2\1\6\5\1\5\f\a\1 ]]
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:33:02.882  Validate MD5 checksum, iteration 2
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:33:02.882   16:43:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:33:02.882  [2024-12-09 16:43:31.789108] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:33:02.882  [2024-12-09 16:43:31.789466] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85652 ]
00:33:02.882  [2024-12-09 16:43:31.973237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:03.141  [2024-12-09 16:43:32.100281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:05.047  
[2024-12-09T16:43:34.486Z] Copying: 650/1024 [MB] (650 MBps)
[2024-12-09T16:43:35.866Z] Copying: 1024/1024 [MB] (average 652 MBps)
00:33:06.687  
00:33:06.687   16:43:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:33:06.687   16:43:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:33:08.594    16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5bc6ca105cf95e699e753dd66ccae36f
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5bc6ca105cf95e699e753dd66ccae36f != \5\b\c\6\c\a\1\0\5\c\f\9\5\e\6\9\9\e\7\5\3\d\d\6\6\c\c\a\e\3\6\f ]]
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
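The validation pass that just completed follows a straightforward read-and-verify loop: each iteration pulls the next 1024 MiB window out of the ftln1 bdev over NVMe/TCP, hashes the copy, and compares it to a previously recorded checksum. A condensed sketch of the pattern the trace implies (expected_md5 and testfile are placeholder names, the mismatch handling is an assumption, and tcp_dd is the helper traced above):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd wraps spdk_dd with the NVMe/TCP initiator config
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        if [[ $sum != "${expected_md5[i]}" ]]; then
            echo "MD5 mismatch in iteration $((i + 1))"
            return 1
        fi
    done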
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85529 ]]
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85529
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85529 ']'
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85529
00:33:08.594    16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:08.594    16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85529
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85529'
00:33:08.594  killing process with pid 85529
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85529
00:33:08.594   16:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85529
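The shutdown path above uses the common killprocess helper: probe the pid with kill -0, look up the command name with ps before signalling, then wait so the reactor's exit status is actually collected. Roughly, as a sketch of the traced logic rather than the verbatim helper (the sudo special case visible in the trace is elided):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0   # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                  # reap the process and surface its exit code
    }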
00:33:09.532  [2024-12-09 16:43:38.529588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:33:09.532  [2024-12-09 16:43:38.549342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.549380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinit core IO channel
00:33:09.532  [2024-12-09 16:43:38.549401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:33:09.532  [2024-12-09 16:43:38.549411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.549432] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:33:09.532  [2024-12-09 16:43:38.553481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.553518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Unregister IO device
00:33:09.532  [2024-12-09 16:43:38.553529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.040 ms
00:33:09.532  [2024-12-09 16:43:38.553539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.553734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.553747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Stop core poller
00:33:09.532  [2024-12-09 16:43:38.553759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.168 ms
00:33:09.532  [2024-12-09 16:43:38.553769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.555116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.555152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist L2P
00:33:09.532  [2024-12-09 16:43:38.555164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.333 ms
00:33:09.532  [2024-12-09 16:43:38.555179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.556048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.556248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finish L2P trims
00:33:09.532  [2024-12-09 16:43:38.556268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.835 ms
00:33:09.532  [2024-12-09 16:43:38.556279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.570548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.570585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist NV cache metadata
00:33:09.532  [2024-12-09 16:43:38.570604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.251 ms
00:33:09.532  [2024-12-09 16:43:38.570615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.578164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.578329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist valid map metadata
00:33:09.532  [2024-12-09 16:43:38.578350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.525 ms
00:33:09.532  [2024-12-09 16:43:38.578361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.532  [2024-12-09 16:43:38.578450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.532  [2024-12-09 16:43:38.578463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist P2L metadata
00:33:09.532  [2024-12-09 16:43:38.578474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.049 ms
00:33:09.533  [2024-12-09 16:43:38.578491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.593176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.593317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist band info metadata
00:33:09.533  [2024-12-09 16:43:38.593433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.682 ms
00:33:09.533  [2024-12-09 16:43:38.593470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.608028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.608181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist trim metadata
00:33:09.533  [2024-12-09 16:43:38.608301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.453 ms
00:33:09.533  [2024-12-09 16:43:38.608318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.622853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.622888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist superblock
00:33:09.533  [2024-12-09 16:43:38.622912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.521 ms
00:33:09.533  [2024-12-09 16:43:38.622922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.637358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.637501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL clean state
00:33:09.533  [2024-12-09 16:43:38.637538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.388 ms
00:33:09.533  [2024-12-09 16:43:38.637548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.637609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:33:09.533  [2024-12-09 16:43:38.637627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:33:09.533  [2024-12-09 16:43:38.637641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   2:   261120 / 261120 	wr_cnt: 1	state: closed
00:33:09.533  [2024-12-09 16:43:38.637653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   3:     2048 / 261120 	wr_cnt: 1	state: closed
00:33:09.533  [2024-12-09 16:43:38.637665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:33:09.533  [2024-12-09 16:43:38.637829] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:33:09.533  [2024-12-09 16:43:38.637839] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID:         8acfe7ce-843d-4f42-a2af-cd7dd442e3e7
00:33:09.533  [2024-12-09 16:43:38.637851] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs:    524288
00:33:09.533  [2024-12-09 16:43:38.637861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes:        320
00:33:09.533  [2024-12-09 16:43:38.637871] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes:         0
00:33:09.533  [2024-12-09 16:43:38.637881] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF:                 inf
00:33:09.533  [2024-12-09 16:43:38.637891] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:33:09.533  [2024-12-09 16:43:38.637916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   crit: 0
00:33:09.533  [2024-12-09 16:43:38.637932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   high: 0
00:33:09.533  [2024-12-09 16:43:38.637942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]    low: 0
00:33:09.533  [2024-12-09 16:43:38.637952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]  start: 0
00:33:09.533  [2024-12-09 16:43:38.637963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.637975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Dump statistics
00:33:09.533  [2024-12-09 16:43:38.637987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.355 ms
00:33:09.533  [2024-12-09 16:43:38.637998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.657658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.657693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize L2P
00:33:09.533  [2024-12-09 16:43:38.657706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 19.660 ms
00:33:09.533  [2024-12-09 16:43:38.657717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.533  [2024-12-09 16:43:38.658279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:09.533  [2024-12-09 16:43:38.658293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize P2L checkpointing
00:33:09.533  [2024-12-09 16:43:38.658303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.535 ms
00:33:09.533  [2024-12-09 16:43:38.658313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.722878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.722925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:33:09.793  [2024-12-09 16:43:38.722938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.722954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.722984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.722995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:33:09.793  [2024-12-09 16:43:38.723006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.723016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.723114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.723129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:33:09.793  [2024-12-09 16:43:38.723138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.723148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.723170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.723182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:33:09.793  [2024-12-09 16:43:38.723191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.723201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.844664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.844909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:33:09.793  [2024-12-09 16:43:38.844932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.844944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.941504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.941547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:33:09.793  [2024-12-09 16:43:38.941562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.941573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.941676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.941689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:33:09.793  [2024-12-09 16:43:38.941700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.941711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.941755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.941781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:33:09.793  [2024-12-09 16:43:38.941791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.941801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.942142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.942190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:33:09.793  [2024-12-09 16:43:38.942223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.942254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.942329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.942365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize superblock
00:33:09.793  [2024-12-09 16:43:38.942402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.942431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.942490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.793  [2024-12-09 16:43:38.942614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:33:09.793  [2024-12-09 16:43:38.942627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.793  [2024-12-09 16:43:38.942638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.793  [2024-12-09 16:43:38.942679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:09.794  [2024-12-09 16:43:38.942696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:33:09.794  [2024-12-09 16:43:38.942706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:33:09.794  [2024-12-09 16:43:38.942716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:33:09.794  [2024-12-09 16:43:38.942831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 394.098 ms, result 0
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:11.171  Remove shared memory files
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85312
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:33:11.171  
00:33:11.171  real	1m25.145s
00:33:11.171  user	1m54.906s
00:33:11.171  sys	0m24.714s
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:11.171  ************************************
00:33:11.171  END TEST ftl_upgrade_shutdown
00:33:11.171  ************************************
00:33:11.171   16:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@14 -- # killprocess 77773
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 77773 ']'
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@958 -- # kill -0 77773
00:33:11.171  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77773) - No such process
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77773 is not found'
00:33:11.171  Process with pid 77773 is not found
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85775
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:11.171   16:43:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85775
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 85775 ']'
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:11.171  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:11.171   16:43:40 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:11.431  [2024-12-09 16:43:40.427617] Starting SPDK v25.01-pre git sha1 6584139bf / DPDK 24.03.0 initialization...
00:33:11.431  [2024-12-09 16:43:40.427750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85775 ]
00:33:11.690  [2024-12-09 16:43:40.613814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:11.690  [2024-12-09 16:43:40.719509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:12.629   16:43:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:12.629   16:43:41 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:12.629   16:43:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:12.889  nvme0n1
00:33:12.889   16:43:41 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:12.889    16:43:41 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:12.889    16:43:41 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:12.889   16:43:42 ftl -- ftl/common.sh@28 -- # stores=96bf0b95-3767-4236-a090-a4db8771635f
00:33:12.889   16:43:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:12.889   16:43:42 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96bf0b95-3767-4236-a090-a4db8771635f
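clear_lvols, traced above, enumerates every logical-volume store over RPC and deletes each one by UUID so the base bdev is clean before the next step. The pattern boils down to ($rpc_py standing in for scripts/rpc.py with the target's socket):

    stores=$($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done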
00:33:13.148   16:43:42 ftl -- ftl/ftl.sh@23 -- # killprocess 85775
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 85775 ']'
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@958 -- # kill -0 85775
00:33:13.148    16:43:42 ftl -- common/autotest_common.sh@959 -- # uname
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:13.148    16:43:42 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85775
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85775'
00:33:13.148  killing process with pid 85775
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@973 -- # kill 85775
00:33:13.148   16:43:42 ftl -- common/autotest_common.sh@978 -- # wait 85775
00:33:15.686   16:43:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:15.686  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:15.946  Waiting for block devices as requested
00:33:15.946  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:16.207  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:16.207  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:16.466  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:21.742  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:21.742   16:43:50 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:21.742   16:43:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:21.742  Remove shared memory files
00:33:21.742   16:43:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:21.742   16:43:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:21.742   16:43:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:21.742   16:43:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:21.742   16:43:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:21.742  
00:33:21.742  real	11m50.011s
00:33:21.742  user	14m28.034s
00:33:21.742  sys	1m33.343s
00:33:21.742  ************************************
00:33:21.742  END TEST ftl
00:33:21.742  ************************************
00:33:21.742   16:43:50 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:21.742   16:43:50 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:21.742   16:43:50  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:33:21.742   16:43:50  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:33:21.742   16:43:50  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:33:21.742   16:43:50  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:33:21.742   16:43:50  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:33:21.742   16:43:50  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:33:21.742   16:43:50  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:33:21.742   16:43:50  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:33:21.742   16:43:50  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:33:21.742   16:43:50  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:33:21.742   16:43:50  -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:21.742   16:43:50  -- common/autotest_common.sh@10 -- # set +x
00:33:21.742   16:43:50  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:33:21.742   16:43:50  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:33:21.742   16:43:50  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:33:21.742   16:43:50  -- common/autotest_common.sh@10 -- # set +x
00:33:24.281  INFO: APP EXITING
00:33:24.281  INFO: killing all VMs
00:33:24.281  INFO: killing vhost app
00:33:24.281  INFO: EXIT DONE
00:33:24.541  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:25.109  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:33:25.109  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:33:25.109  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:33:25.109  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:33:25.678  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:25.937  Cleaning
00:33:25.938  Removing:    /var/run/dpdk/spdk0/config
00:33:25.938  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:25.938  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:25.938  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:25.938  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:25.938  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:33:25.938  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:33:25.938  Removing:    /var/run/dpdk/spdk0
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid58702
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid58937
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid59169
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid59277
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid59329
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid59468
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid59486
00:33:25.938  Removing:    /var/run/dpdk/spdk_pid59696
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid59802
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid59915
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60037
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60146
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60185
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60222
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60298
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60404
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60851
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid60928
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61002
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61018
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61177
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61193
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61341
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61368
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61432
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61456
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61525
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61543
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61738
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61775
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid61864
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62058
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62153
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62195
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62646
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62749
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62864
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62917
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid62942
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid63021
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid63669
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid63717
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid64204
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid64302
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid64422
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid64475
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid64500
00:33:26.197  Removing:    /var/run/dpdk/spdk_pid64526
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66421
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66559
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66573
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66585
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66632
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66636
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66648
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66698
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66702
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66714
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66764
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66768
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid66780
00:33:26.198  Removing:    /var/run/dpdk/spdk_pid68212
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid68320
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid69752
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid71499
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid71573
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid71654
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid71770
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid71862
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid71963
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72043
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72123
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72235
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72327
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72429
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72509
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72590
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72698
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72792
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72900
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid72975
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73056
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73162
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73262
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73359
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73433
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73517
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73599
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73673
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73793
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73887
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid73987
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74068
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74142
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74223
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74299
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74407
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74498
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74653
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74948
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid74985
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid75443
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid75629
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid75733
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid75849
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid75907
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid75934
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid76230
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid76301
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid76387
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid76814
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid76960
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid77773
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid77923
00:33:26.460  Removing:    /var/run/dpdk/spdk_pid78130
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid78238
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid78591
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid78878
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79242
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79452
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79606
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79670
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79813
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79849
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid79913
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid80125
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid80363
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid80836
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid81295
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid81791
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid82345
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid82497
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid82584
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid83238
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid83312
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid83809
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid84222
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid84759
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid84887
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid84940
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85000
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85057
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85121
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85312
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85400
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85467
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85529
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85570
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85652
00:33:26.720  Removing:    /var/run/dpdk/spdk_pid85775
00:33:26.720  Clean
00:33:26.720   16:43:55  -- common/autotest_common.sh@1453 -- # return 0
00:33:26.720   16:43:55  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:26.720   16:43:55  -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:26.720   16:43:55  -- common/autotest_common.sh@10 -- # set +x
00:33:26.980   16:43:55  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:26.980   16:43:55  -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:26.980   16:43:55  -- common/autotest_common.sh@10 -- # set +x
00:33:26.980   16:43:56  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:26.980   16:43:56  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:26.980   16:43:56  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:26.980   16:43:56  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:26.980    16:43:56  -- spdk/autotest.sh@398 -- # hostname
00:33:26.980   16:43:56  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:27.239  geninfo: WARNING: invalid characters removed from testname!
00:33:53.796   16:44:22  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:56.332   16:44:25  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:58.238   16:44:27  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:00.146   16:44:29  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:02.053   16:44:31  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:04.592   16:44:33  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:06.501   16:44:35  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:06.501   16:44:35  -- spdk/autorun.sh@1 -- $ timing_finish
00:34:06.501   16:44:35  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:06.501   16:44:35  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:06.501   16:44:35  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:06.501   16:44:35  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:06.501  + [[ -n 5241 ]]
00:34:06.501  + sudo kill 5241
00:34:06.511  [Pipeline] }
00:34:06.526  [Pipeline] // timeout
00:34:06.531  [Pipeline] }
00:34:06.544  [Pipeline] // stage
00:34:06.549  [Pipeline] }
00:34:06.562  [Pipeline] // catchError
00:34:06.571  [Pipeline] stage
00:34:06.573  [Pipeline] { (Stop VM)
00:34:06.585  [Pipeline] sh
00:34:06.869  + vagrant halt
00:34:10.164  ==> default: Halting domain...
00:34:16.807  [Pipeline] sh
00:34:17.089  + vagrant destroy -f
00:34:19.626  ==> default: Removing domain...
00:34:20.207  [Pipeline] sh
00:34:20.490  + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:20.501  [Pipeline] }
00:34:20.515  [Pipeline] // stage
00:34:20.520  [Pipeline] }
00:34:20.533  [Pipeline] // dir
00:34:20.539  [Pipeline] }
00:34:20.553  [Pipeline] // wrap
00:34:20.560  [Pipeline] }
00:34:20.572  [Pipeline] // catchError
00:34:20.581  [Pipeline] stage
00:34:20.584  [Pipeline] { (Epilogue)
00:34:20.597  [Pipeline] sh
00:34:20.881  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:26.163  [Pipeline] catchError
00:34:26.164  [Pipeline] {
00:34:26.173  [Pipeline] sh
00:34:26.452  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:26.452  Artifacts sizes are good
00:34:26.461  [Pipeline] }
00:34:26.475  [Pipeline] // catchError
00:34:26.485  [Pipeline] archiveArtifacts
00:34:26.492  Archiving artifacts
00:34:26.603  [Pipeline] cleanWs
00:34:26.615  [WS-CLEANUP] Deleting project workspace...
00:34:26.615  [WS-CLEANUP] Deferred wipeout is used...
00:34:26.622  [WS-CLEANUP] done
00:34:26.624  [Pipeline] }
00:34:26.638  [Pipeline] // stage
00:34:26.644  [Pipeline] }
00:34:26.657  [Pipeline] // node
00:34:26.663  [Pipeline] End of Pipeline
00:34:26.702  Finished: SUCCESS